Stills taken from animated presentation. Brief commentary added. Contact: dcolls@thoughtworks.com
I'm talking about things better not tested on humans. But I'm not talking about testing with real dummies...
I'm talking about virtual dummies. I'm talking about how simulation can make you more agile.
Here's a simulated car crash. At 60km/h the impact sends a shockwave through the frame at 20x the speed of sound. Crumple zones
absorb the energy of the impact. A rigid safety cage protects the occupants - in theory. I wouldn't even want to be a dummy in this crash!
Crash testing cars in real life is not good. But decades ago, by default, that was how it was done. When the importance of crash safety
was recognised, lab testing was introduced. Advances in computing power have made virtual testing in simulation faster and cheaper.
Could we take a similar approach to designing call centres? (Not a product in itself, but an augmentation of a family of products.)
We've all suffered trauma with call centres. Would we be better off testing in a lab? And faster and cheaper with simulation?
Let's look at simulation in product development.
Here's a model of product development. It places equal emphasis on building a product, measuring its performance in the marketplace,
and learning to iterate with better ideas. Iterating this cycle faster improves your products faster, generating competitive advantage.
Here's another model of product development. Perhaps it resonates with some of you.
But we'll use the lean startup model to explore the application of simulation in product development.
But what if your build path from idea to product is really long? Complex technology? Physical build? Training service staff? Organisational factors?
Or if your measure path from product to performance is really long? Difficulties distributing? Or simply taking too long to amass the data you need?
Then it's hard to learn and iterate fast. What can we do about this?
Well, you can take a shortcut. Formalise your idea into a design. Mock the design into a prototype and test it in the lab. The aim is to acquire
the same objective data - albeit with some reduction in quality - as you would from real-world testing. Then learn from the data and iterate.
Or you could take a bigger shortcut by modelling a virtual prototype and simulating its performance. Again, the
aim is to acquire objective performance data, learn and iterate on the product. You can now cycle much faster.
I'm not talking about going so far that you only produce a mental prototype and run a thought experiment. This is a very weak test and only
reflects your existing biases back at you. Simulation is effective because - if done right - it provides an objective assessment of your product.
So simulation can help if your build and measure paths are really long...
…or if they're not that long, but you need to run the cycle many times to improve your product's performance.
Now performance can be expressed in whatever terms are relevant to your business. Let's look at some examples.
The design of a car is a set of 10,000s of parameters. Everything from the geometry of the frame to the upholstery on the seats.
What's relevant to crash is the lengths, angles, sheet thickness and material of the members of the frame.
When the frame geometry is specified, we can create a virtual prototype, a mesh of 100,000s of points. We can simulate how this mesh
behaves when we crash it into an obstacle. The crash takes 30ms in real life, and 100 hours to simulate.
The simulation produces a DVD's worth of data: a 3D animation of the frame deforming. A couple of dozen parameters are relevant to crash
performance. The intrusion of the frame into the occupant space and the deceleration shock experienced by the occupants determine the trauma.
Then you can assess the performance of multiple designs and you can start to figure out how design affects performance.
Then you can start to predict how you can change the design to get the best performance.
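To make that concrete, here's a toy sketch in Python - my own illustration, not a real finite-element crash model. A 1D chain of three masses (crumple zone, safety cage, occupant) hits a rigid wall, and we sweep one design parameter to find the crumple stiffness that minimises occupant deceleration while keeping intrusion under an arbitrary limit. All numbers are invented.

```python
# Hypothetical toy model, not a real crash solver: three masses joined by
# springs hit a rigid wall at ~60 km/h. Parameters are invented for illustration.

def simulate_crash(crumple_k, cage_k=5e6, v0=16.7, dt=1e-5, t_end=0.1):
    """Return (peak occupant deceleration in g, intrusion toward occupant in m)."""
    m = [20.0, 200.0, 80.0]        # crumple-zone, cage, occupant masses (kg)
    k = [crumple_k, cage_k]        # spring between node i and node i+1 (N/m)
    x = [0.0, 0.5, 1.0]            # positions; the wall is at x = 0 (m)
    v = [-v0, -v0, -v0]            # everything moving toward the wall
    peak_g, min_gap = 0.0, x[2] - x[1]
    for _ in range(int(t_end / dt)):
        f = [0.0, 0.0, 0.0]
        if x[0] < 0.0:                       # stiff one-sided wall contact
            f[0] -= 1e8 * x[0]
        for i in range(2):                   # spring forces between neighbours
            ext = (x[i + 1] - x[i]) - 0.5
            f[i] += k[i] * ext
            f[i + 1] -= k[i] * ext
        for i in range(3):                   # semi-implicit Euler time step
            v[i] += (f[i] / m[i]) * dt
            x[i] += v[i] * dt
        peak_g = max(peak_g, abs(f[2] / m[2]) / 9.81)
        min_gap = min(min_gap, x[2] - x[1])
    return peak_g, 0.5 - min_gap             # intrusion = how far the cage closed in

# Sweep one design parameter, then pick the design with the lowest occupant
# deceleration that keeps intrusion under an arbitrary 0.25 m limit.
designs = [2e5 * i for i in range(1, 11)]
results = [(k,) + simulate_crash(k) for k in designs]
feasible = [r for r in results if r[2] < 0.25]
best = min(feasible, key=lambda r: r[1]) if feasible else None
print("best crumple stiffness, peak g, intrusion:", best)
```

The sweep is the point: once performance can be computed from a design, we can compare many designs and predict which changes improve them - exactly the loop described above, just at toy scale.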
And you might get something like this. But you wouldn't really need simulation to figure out that a tank with a spring on the front -
an extreme example of crumple zones and safety cage - is safe in a crash. But we might ask...
…how does it handle? Well, we could simulate that too, and the aerodynamics, and the life of chassis components, and the fuel economy.
You see, the answer is obvious without constraints. The real power of simulation is optimising within constraints or trade-offs.
So this is a very simplified picture of the learning process.
It looks more like this, across multiple - possibly conflicting - performance disciplines. We might need to run lots of tests to find the best, or the
most robust, designs. We might get stuck in dead ends. We might need to go back to the drawing board altogether - double-loop learning.
But enough about car crashes. Here's our call centre context. Complex business on the left. Would be nice to simplify, but that's off the table.
The project is aiming to support more customer-aligned management of agents by migrating to new routing technology, but the build will be long!
So in parallel with the build of production systems, the team starts building a simulator. The design comprises rules to identify the type of call, and rules
to target skills based on the type. Note that the actual agents and their actual skills are part of the design - a human rather than computer design element.
This design can be modelled into a virtual prototype. A simulated customer calls with a certain enquiry. The simulation determines
the type of call, the skills required, and then connects to a simulated agent. The simulated agent may resolve the call or transfer it.
The performance can be assessed in terms of the number of calls that are resolved first time.
And in terms of how quickly calls are answered. This is another relevant measure, but potentially in conflict with the first.
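Here's a minimal sketch of that loop in Python - a hypothetical illustration, not the project's actual simulator. Rules map an enquiry to a call type and required skills, simulated agents resolve or transfer calls, and we report first-call resolution and average speed of answer. The rules, agents and timings are all made up.

```python
# Hypothetical call-centre simulator sketch: routing rules plus simulated agents,
# measured on first-call resolution and average speed of answer.
import random

CALL_TYPE_RULES = {"billing": "billing", "outage": "faults", "new plan": "sales"}
SKILL_RULES = {"billing": {"billing"}, "faults": {"faults"}, "sales": {"sales"}}

AGENTS = [  # the human part of the design: agents and their skills (invented)
    {"name": "A", "skills": {"billing", "sales"}},
    {"name": "B", "skills": {"faults"}},
    {"name": "C", "skills": {"sales"}},
]

def route(enquiry):
    call_type = CALL_TYPE_RULES.get(enquiry, "general")
    required = SKILL_RULES.get(call_type, set())
    # naive targeting: first agent holding all required skills, else anyone
    for agent in AGENTS:
        if required <= agent["skills"]:
            return agent, True               # resolved first time
    return random.choice(AGENTS), False      # mis-routed; assume a transfer

def simulate(calls=10_000, seed=1):
    random.seed(seed)
    resolved_first_time, total_wait = 0, 0.0
    for _ in range(calls):
        enquiry = random.choice(list(CALL_TYPE_RULES) + ["something else"])
        wait = random.expovariate(1 / 30)    # toy queueing: ~30 s average wait
        agent, resolved = route(enquiry)
        if resolved:
            resolved_first_time += 1
        else:
            wait += random.expovariate(1 / 60)  # transfers add more waiting
        total_wait += wait
    return resolved_first_time / calls, total_wait / calls

fcr, asa = simulate()
print(f"first-call resolution: {fcr:.0%}, average speed of answer: {asa:.0f}s")
```

Changing the rules or the agents' skills changes both metrics - and, as noted above, often in conflicting directions.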
Then we can learn and iterate the design to improve performance. And early in the simulator development, we did just that.
In a game show. We asked contestants to take turns fixing a simple but poorly designed call centre. The key learnings: the winner knew nothing
about call centres - he simply observed how design determined performance - and large-scale business phenomena could be reproduced by simulation.
Note that performance could only be assessed by speed of answer in the first game. So, when first-time resolution could be simulated - more about
iterative development later - we repeated the game show. The key learning this time was that the old solutions were no longer the best designs.
So we've looked a lot at learning through simulation. Let's really firm that up.
On the left, a project without simulation. On the right, a project with simulation. Every point is an opportunity to experiment and learn.
By building a simulator in parallel with production technology, we can start to simulate (green circles) and learn very early.
Through more and more frequent simulation, we can improve the design (progressing from simple to real-world scenarios) to a point where
we're confident to run production trials with limited production technology.
We can get the production system ready early (when simulation may no longer be needed) and we can achieve a high level of performance early.
Contrast the project with no feedback opportunities - the technology is only ready right at the end, and the performance is questionable.
This is possible because the cycle time for simulation is so much faster, and the resources required and the impact are minimal.
So the benefits of simulation are not in the simulation path itself, but in the ability to execute the learning path more frequently.
Simulation learnings manifest as better and more robust designs, but also in better developed organisational learning to improve products.
Learnings also manifest outside of product development in training (e.g., flight simulators) and in generating evidence (e.g., climate change modelling).
But it's not all sunny weather...
The things we can simulate are limited. Crowd behaviours, for instance, can only be simulated when many individuals interact in a constrained scenario.
So crash simulates physical phenomena, and call centres simulate computer systems and crowd behaviours.
We can't simulate the novel thoughts and emotions of individuals, or in situations where we have no historical pattern of behaviour. To do this, we'd need
to simulate all of our sensory and cognitive apparatus, which is some way off. IBM have however simulated a cat's brain - not sure if that helps?
So to use simulation effectively, we need to be able to decouple "mechanical" design elements from "emotional" design elements.
For example, a car's exterior and interior styling causes an emotional response that sells vehicles, while the hidden frame makes it safe.
Sometimes the weather is downright stormy. We'll examine risks, but we'll also present a development approach to manage them.
The first risk is that the simulator is built wrong, either as a result of low quality (bugs) or lack of objectivity (reflecting our own biases back).
The second risk is that we simulate something we can't actually build. Hence we should build the production technology in parallel.
Third, a meta-risk. By devoting resources to building a simulation capability, we may be diverting them from fixing the root cause.
If these resources could be applied to fixing the long build and measure legs, we may not need simulation at all.
So how do we develop a simulation capability without falling foul of the limitations and risks?
Developing a simulation capability is not buying software, not even building software. It's developing an organisational capability to go from
idea to performance through simulation. Invest in it iteratively, as you would in technical practices, continuous delivery, or becoming a learning organisation.
Let's look at how to do that.
From the outside, we just want to be able to reflect some element of design back as performance, by running an experiment in simulation.
But to simulate that experiment, we need to know how it would play out in the real world. We need to study the real world - objectively - and understand
the phenomena that determine the performance of a design. We then need to implement algorithms to reproduce those phenomena "in silico".
We then need to translate between human-understandable experiments and machine-understandable algorithms. We do all of this for one thin slice,
e.g. the answer time for agents. When that slice is done, we assess whether simulation is adding value. If so, continue. If not, pivot or stop.
We then do the next slice, e.g. the first-time resolution. We prioritise slices by their ability to reduce product risk, accelerate learning,
improve the design or reduce harm. We keep doing slices while they keep adding value (through risk reduction).
To do all this, we need a range of skills. The skills of scientists to study the world - and they must be allowed to be as objective as possible - the skills
of developers to reproduce the real world, analysts to translate from experiment to simulation, and product management to drive experimentation.
We saw the importance of visualisation to the car crash work. I thought call centres deserved the same treatment. I'd like to share that with you.
When presented live, this is an interactive animated visualisation in which the simulation is compared to reality over the same period.
Leave Product Development to the Dummies
Let's review the benefits...
And then ask a series of questions to determine if simulation is right in your product development context.
First, review that meta-risk, but be aware that fixing the root cause may take time - time you may not have before your product launch.
You can start iteratively building a simulation capability and assess its value…
