Chapter 4: Getting Started with Our First Digital Twin
In the previous chapter, we looked at the different organizational perspectives
when deciding on your first Digital Twin candidate. The size and type of your
company have an impact on your perspective and the specific value drivers that
will be important to you in the selection process. We identified the importance of
clear roles and responsibilities when embarking on a Digital Twin program. We
also described the process of experimentation to determine the finalist for your
first Digital Twin.
We also selected a finalist from that process as the candidate for our Digital
Twin. You should now have a clear idea of the type of Digital Twin
that would address your specific organization's needs. We chose a specific finalist
in the previous chapter to use as an example for the remainder of this book, but
this approach can be applied to any Digital Twin that you may want to start with.
However, we recommend that you follow the example as we build it out in the
next few chapters.
This chapter will cover the planning framework, how to validate the problem
statement and expected outcomes, and the proposed business process for
developing your Digital Twin. Finally, we will address some of the technical
considerations of and approaches to digital platform selection.
We will cover the following topics in this chapter:
Planning framework
Validating the problem statement and outcomes
Exploring the business process for Digital Twin development
Factoring in technology considerations
Exploring digital platforms for Digital Twins
Let's start with a planning framework for your first Digital Twin that builds on
the methodologies described in the previous chapter.
Planning framework
Chapter 3, Identifying the First Digital Twin, described an agile development
process, which is the approach we suggest when building your first Digital Twin.
Agile methods allow you to make quick course corrections as Digital Twins may
be new to you and your organization. You probably won't have precise design
specifications.
However, it remains essential to follow a structured planning process, even if you
are using an agile development methodology. This section will describe different
planning perspectives that are important to consider when getting started with
your first Digital Twin.
A project planning framework provides guidance on the different project phases
for developing your first Digital Twin. The planning framework for our first
Digital Twin has been tailored to be agile, as the technical and business impact of
Digital Twins is still unknown to your organization.
Project planning framework
It is vital to clearly outline what is expected from each stakeholder during the
first Digital Twin development. We covered the RACI diagram in Chapter
3, Identifying the First Digital Twin, as well as the different RACI roles, depending
on the type of organization you are building your first Digital Twin for.
The following diagram shows a project and planning framework for a typical
predictive maintenance Digital Twin in the business unit of a large enterprise
organization. The five high-level phases apply to any use case or industry when
developing your first Digital Twin. Each phase's content and approach may vary
slightly, depending on whether this Digital Twin performs a predictive
maintenance function, operational monitoring, simulation, or any other
specialized capability:
Figure 4.1 – Project planning framework for a predictive maintenance Digital
Twin
We distinguish between the line of business (LoB) and information
technology (IT) roles during these phases while focusing on a key aspect of
the Digital Twin's development. The LoB function focuses on the business
challenge and engineering analytics to address it. It is also concerned with the
operational business process that needs to adapt to support the Digital Twin
technology. The IT function focuses on the digital enabling technology that's used
to create and operate the Digital Twin throughout its life cycle.
Phase 0 – the pre-project phase
The pre-project phase, as shown in the preceding diagram, refers to the preparation
work that needs to be done, though this is not necessarily part of a Digital Twin
project. Reliability engineers, operations managers, and other LoB users often
conduct business performance analyses to determine areas to focus on for future
efficiency and effectiveness improvements. A bad actor analysis using lean first
principles is a typical approach that's used by reliability engineers, as shown in
this predictive maintenance example. Bad actor analysis is the formal review
process of a plant or factory's operating assets. It uses the Pareto principle where,
in the case of equipment failures, 20% of the equipment is typically responsible for
80% of failures. The Pareto principle is often referred to as the 80/20 rule and is
used in many industries and applications. The objective of bad actor analysis is to
find the 20% of the equipment that causes the most downtime or loss of production
and rank them. One ranking mechanism that's used in bad actor analysis is to
apply the same Pareto principle to the 20%. It finds the 20% that is responsible for
80% of the original 80% of failures. This means that we now have 4% of the original
assessment, which is responsible for 64% of failures or downtime.
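As an illustration, this nested Pareto ranking can be sketched in a few lines of Python. The asset names and downtime figures below are hypothetical:

# A sketch of nested Pareto (bad actor) ranking, using hypothetical
# downtime hours per asset. Asset names and values are illustrative.
downtime_hours = {
    "conveyor_system": 310,
    "robotic_arm": 185,
    "wrapping_station": 95,
    "slurry_pump_01": 60,
    "compressor_02": 25,
    "hvac_unit": 10,
}

# Rank assets from worst to best.
ranked = sorted(downtime_hours.items(), key=lambda kv: kv[1], reverse=True)

# First pass: the top 20% of assets (the "bad actors").
top_20_pct = ranked[: max(1, round(len(ranked) * 0.2))]

# Second pass: apply the 80/20 rule again within the bad actors to find
# the roughly 4% of all assets driving about 64% of total downtime.
top_4_pct = top_20_pct[: max(1, round(len(top_20_pct) * 0.2))]

total = sum(downtime_hours.values())
for asset, hours in top_4_pct:
    print(f"{asset}: {hours} h ({hours / total:.0%} of total downtime)")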
The LoB function analyzes the overall production value chain and identifies
potential system bottlenecks and specific assets or systems, which are referred
to as bad actors. As mentioned previously, these bad actors often produce the
majority of equipment failure incidents. It is a common practice to rank the bad
actors in terms of key performance metrics such as throughput or production
losses, downtime hours, repair cost, and safety. For a manufacturing line, the main
bad actor may be the conveyor system, the second-ranking bad actor may be the
robotic assembly arm, and the third may be the wrapping station at the end of the
line.
The next step, as shown in the following diagram, is to determine the main failure
modes of these top-ranking bad actors. Formal Failure Mode Effect and
Criticality Analysis (FMECA) is a well-established practice in reliability
engineering. For our first Digital Twin, however, we do not require a full-scale
FMECA. It is still essential to identify key failure modes based on historical
maintenance and operational data, as this ensures that the prototype Digital Twin
will demonstrate value during a short duration project validation phase:
Figure 4.2 – Project planning framework for the predictive maintenance Digital
Twin
Once we understand the primary failure modes, we can identify the root causes
of these failure modes. This is also a well-established practice in reliability
engineering. Still, for the purposes of this book, we will simply identify the leading
root causes of the most predominant failure modes. Due to the cause-and-effect relationship
that we often see with physical equipment, some root causes may result in
multiple failure modes.
The following diagram shows that Root Cause 2 can create Failure Mode
1 and Failure Mode 3. The business impact and nature of the bad actor failure
modes will determine the level of analysis, but for this example, we will assume
that the LoB function will provide the necessary input for this:
Figure 4.3 – Relationship of root causes and failure modes
The objective of understanding the root causes is to determine if we can identify
any leading indicators that could be supported in a Digital Twin, through real-
time data and analytics from IoT and sensor devices. These leading indicators can
be based on raw sensor data, physics models, and mathematical or statistical
models, as described in Chapter 1, Introduction to Digital Twin.
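As an illustration, a simple statistical leading indicator could flag an upward drift in a bearing temperature reading before the failure mode fully develops. The following Python sketch uses a rolling mean; the window size, threshold, and readings are assumptions for illustration only:

# A sketch of a statistical leading indicator: flag when the rolling
# mean of a sensor exceeds a baseline threshold. Values are hypothetical.
from collections import deque

WINDOW = 5            # number of recent readings to average (assumed)
THRESHOLD_C = 78.0    # alert threshold in degrees Celsius (assumed)

readings = deque(maxlen=WINDOW)

def on_new_reading(temperature_c: float) -> bool:
    """Return True if the leading indicator signals a potential failure."""
    readings.append(temperature_c)
    rolling_mean = sum(readings) / len(readings)
    return len(readings) == WINDOW and rolling_mean > THRESHOLD_C

# Simulated bearing temperature stream drifting upward.
for t in [71.0, 72.5, 74.0, 76.5, 79.0, 81.5, 83.0]:
    if on_new_reading(t):
        print(f"Leading indicator triggered at {t} °C")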
These real-time indicators can be established from the Internet of
Things (IoT), Operational Technology (OT), and enterprise business systems
such as Enterprise Asset Management (EAM), Enterprise Resource
Planning (ERP), and Manufacturing Execution System (MES) solutions.
This requirement to integrate with multiple different systems introduces
the IT and development responsibilities given in Figure 4.1. This is the ideal
opportunity to establish a high-level reference architecture that can be used for
the first Digital Twin, and then adapted in the future based on the outcomes and
learning from this initial project.
It is also recommended to start the governance processes around security and
trustworthiness at these early stages and instill the discipline for future projects.
Digital Twins introduce several new security vulnerabilities and potential attack
surfaces. The IT function can use the initial Digital Twin project to assess the
impact and identify risk mitigation strategies.
Phase 1 – the project scoping phase
During the project scoping phase, the business and operations team identify and
confirm that the lead indicators for the high-priority bad actor assets have
associated data sources. It is still important to start with the failure modes and
root causes and not with the available data sources. Your first Digital Twin
solution should focus on delivering value quickly and, as such, be problem-
oriented.
Business and operations teams should complete a business readiness assessment,
as outlined in Figure 1.10 of Chapter 1, Introduction to Digital Twin. Prioritize use
cases based on technical readiness and business impact to identify the first
Digital Twin. The business impact measures also relate to the value at
stake metrics, which we identified in Figure 1.9 of Chapter 1, Introduction to
Digital Twin. The potential business impact measures from the assessment we
provided in Figure 1.10 of Chapter 1, Introduction to Digital Twin, include safety,
downtime, throughput, quality, and cost. The technical readiness assessment
covers the availability of data, the maturity of automation and IT systems,
analytics, the proposed deployment environment, and the project management
maturity level.
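One lightweight way to perform this prioritization is to score each candidate use case on business impact and technical readiness, and then rank by the combined score. The following sketch shows the idea; the use cases, scores, and scoring rule are hypothetical:

# A sketch of use case prioritization by business impact and technical
# readiness. Scores are on a 1-5 scale and entirely hypothetical.
use_cases = {
    "slurry pump predictive maintenance": {"impact": 5, "readiness": 4},
    "conveyor operational monitoring":    {"impact": 4, "readiness": 3},
    "robotic arm simulation":             {"impact": 3, "readiness": 2},
}

def priority(scores: dict) -> int:
    # Multiplying the two dimensions favors candidates that are strong
    # on both, rather than excellent on one and weak on the other.
    return scores["impact"] * scores["readiness"]

ranked = sorted(use_cases.items(), key=lambda kv: priority(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{priority(scores):>3}  {name}")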
This phase also includes defining the value at stake and the key success criteria
for the first Digital Twin. It answers the question, what does success look like for
the first Digital Twin?
For IT teams, this phase provides the opportunity to assess the data availability
and data quality for the lead indicators that the business team identified. Data
integration, access, cleanup, and data wrangling can consume more than 50% of
project resources and costs for IoT-based projects, based on research by leading
analyst firms. The project scoping phase provides the IT team with the
opportunity to assess the impact data integration has on the key success metrics
of the Digital Twin project.
This is also the phase where we prepare the Digital Twin platform and configure
the required data integration connectors within the necessary governance
guardrails. Preparing this in advance reduces the risk of creating a Digital Twin
that depends on data that may not be available for integration due to
technical reasons.
During this phase, a final key point is to ensure that business users and IT are
aligned on the Critical Success Factors (CSFs) for the first Digital Twin project. A
common and shared vision around the specific measurable outcomes or CSFs for
the first Digital Twin will ensure a results-based focus for the project.
The next phase focuses on developing and delivering the Digital Twin while
preparing the organization to adapt to operational business processes and the
way people work.
Phase 2 – the project design and development phase
Digitally enabled projects typically require a change in how people go about
routine tasks. Digital Twins, as well as the operational, situational awareness, and
decision support that they provide, change the business processes that people
traditionally follow. We will cover this topic in more depth later in this chapter.
To ensure that the first Digital Twin is a success, it is essential to identify a team
of business users that is willing to embrace technology-based solutions to
address business challenges. Operational users in industrial organizations are
typically more conservative and skeptical of new or emerging technologies. This
is also the phase during the project where a decision is made on what kind of
Digital Twin will be built and how it will integrate with design, manufacturing,
maintenance, and operational models. Furthermore, the design and development
phase provides the engineering team with the opportunity to add additional
sensors and data collection points for testing and certification. Substantial
planning is involved in the placement and data collection plans for sensor data.
Business users bring the required engineering knowledge and expertise to
help plan how to sustain the business value of the twins. Selecting the right team will
improve your chances of a successful first Digital Twin project.
The IT team will develop or configure the Digital Twin during the design and
development phase. This includes integrating real-time inputs and other metadata,
as well as integration with backend business systems. We recommend an agile
development approach, as described in Chapter 3, Identifying the First Digital
Twin, using all the artifacts and processes shown in Figure 3.8.
The verification steps during the design and development phase provide the
necessary governance to ensure the Digital Twin is designed and engineered
correctly.
We also recommend that the Digital Twin is configured so that it can
automatically track use case results continuously. The initial Digital Twin project
is typically used to demonstrate the value based on the CSFs. Automating the
reporting on these assists with value tracking during the project validation phase.
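A minimal sketch of such automated tracking, assuming hypothetical CSF names and targets, could record each measurement against its target so that reporting during the validation phase is automatic:

# A sketch of continuous CSF tracking against targets. The CSF names,
# targets, and measurements are hypothetical.
import datetime

csf_targets = {
    "unplanned_downtime_hours_per_month": 12.0,  # target: at or below
    "prediction_lead_time_hours": 48.0,          # target: at or above
}

csf_log = []

def record_csf(name: str, value: float) -> None:
    """Append a timestamped CSF measurement alongside its target."""
    csf_log.append({
        "timestamp": datetime.datetime.utcnow().isoformat(),
        "csf": name,
        "value": value,
        "target": csf_targets[name],
    })

record_csf("unplanned_downtime_hours_per_month", 9.5)
record_csf("prediction_lead_time_hours", 52.0)
print(csf_log)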
Phase 3 – the project validation phase
The project validation phase focuses on measuring the key success criteria set out
during the project scope and validating the result of using the Digital Twin. The
business team users need to ensure that the business process changes, as
described in phase 2, are implemented during the validation phase.
During the business validation process, both the business and IT teams can use
continuous CSF monitoring results to improve and fine-tune the Digital Twin's
capabilities. This phase will also allow you to understand the support
requirements and other business capabilities required to maintain and scale out
Digital Twins in the organization.
This phase's outcome is to decide if the Digital Twin has delivered on the initial
CSFs and if it is to be continued in production, or if it is the end of the initial
assessment. If we have followed the preceding steps, the likelihood of success is
exceptionally high, and this will often lead to scaling out the Digital Twin in full-
scale production applications.
Phase 4 – the project scaling phase
New requirements and opportunities often emerge during the project validation
phase as business users see the benefit of the improved decision support
capability. Scaling out could mean providing access to additional users or adding
other capabilities and features to address these additional requirements.
The project planning framework used in this example is specific to a predictive
maintenance case. Still, the principle applies to any Digital Twin product, and we
recommend that you outline the planning framework for your project in a
similar way. The single-page summary improves internal communication and
provides a clear understanding of the expectations of the different phases.
The project planning framework is supported by a solution framework that
explains the business value and scope to business executives who sponsor the
Digital Twin project.
Solution planning framework
The proposed solution planning framework has its roots in the Lean
Startup approach, which was introduced by Steve Blank and popularized
by Eric Ries (https://hbr.org/2013/05/why-the-lean-start-up-changes-everything).
The Lean Digital Twin is a methodology developed by XMPro (https://bit.ly/idt-ldt)
and is based on the Lean Startup framework, which focuses on achieving the
product/market fit of a new product before scaling out.
The Lean Digital Twin is an ideal approach when developing your first Digital
Twin since the application and use of Digital Twins in organizations is not well
defined or understood yet:
Figure 4.4 – Moving from the Problem/Solution Fit to the Digital Twin/Business Fit
The first part of the Lean Digital Twin approach focuses on the Problem/Solution
Fit, and the best way to describe this simply is by using the Lean Digital Twin Canvas.
This is based on the lean canvas that's used in the Lean Startup approach, which
describes the business problem, solution integration points, and the business case
on a single page that is easy to communicate to a project team and executive
sponsors:
Figure 4.5 – The lean Digital Twin canvas for the slurry pump predictive
maintenance Digital Twin
The numbers in the preceding diagram indicate the sequence for completing the
canvas during a workshop with the business and IT teams:
1. Problem: Describe the top three problems that the first Digital Twin will
address based on the prioritization matrix described in Chapter
1, Introduction to Digital Twin.
2. Customer Segments: Who are the stakeholders and business users that
will benefit from the first Digital Twin solution?
3. Digital Twin UVP: What is the unique value proposition (UVP) that
makes this Digital Twin different from what you are already doing?
4. Solution: What are the top three features that will deliver the key
capabilities of the Digital Twin (AI, real time, decision support, and so
on)?
5. External Challenges: What are the external red flags for the Digital
Twin (security, data access, connectivity, and so on)?
6. ROI Business Case: How will this Digital Twin deliver the planned
value at stake, as described in Chapter 1, Introduction to Digital Twin?
7. Key Metrics: How will the Digital Twin be measured quantitatively?
8. Integration: What are the critical integrations required to make it
work?
9. Costing: What is the projected cost of developing and operating the
Digital Twin?
The preceding diagram shows a complete canvas for a slurry pump as part of a
predictive maintenance Digital Twin in an industrial mining company. One of
this approach's main benefits is that the canvas provides a single-page view of all
the critical aspects of interest to executive decision-makers. The template for the
canvas is available to download at https://bit.ly/idt-ldtc.
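If you want to keep the canvas under version control alongside the project, the nine blocks can also be captured as structured data. The following sketch illustrates this for the slurry pump example; the entries are illustrative assumptions, not the completed canvas from Figure 4.5:

# A sketch of the Lean Digital Twin Canvas as structured data, so each
# phase's version can be diffed and reviewed. Entries are illustrative.
lean_digital_twin_canvas = {
    "1_problem": ["Unplanned slurry pump failures", "High repair costs",
                  "Lost production"],
    "2_customer_segments": ["Reliability engineers", "Maintenance planners"],
    "3_digital_twin_uvp": "Early warning of pump failures with real-time "
                          "decision support",
    "4_solution": ["Real-time condition monitoring", "Failure prediction",
                   "Decision support"],
    "5_external_challenges": ["OT network security", "Sensor data access"],
    "6_roi_business_case": "Reduced downtime and maintenance cost",
    "7_key_metrics": ["Unplanned downtime hours", "Prediction lead time"],
    "8_integration": ["Historian", "EAM/ERP", "IoT platform"],
    "9_costing": "Development and operating cost estimate for the prototype",
}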
The Lean Digital Twin canvas is ideal as a solution planning framework for your
first Digital Twin, since it ensures that you have documented the problem
statement and visibly clarified the expected outcomes.
Validating your problem statement and assumptions about your first Digital Twin
outcomes is a crucial step that should not be overlooked. The first project often
creates the lasting perception of a new approach such as Digital Twins in your
organization. The second block in Figure 4.4 describes the approach, similar to
the Product/Market validation in the Lean Startup approach. The adapted version
of the Lean Digital Twin refers to the Digital Twin/Business Fit, which is used to
validate the problem statement and check the expected outcomes.
In this section, we proposed the Lean Digital Twin Canvas as a solution
framework for planning the business validation and impact of your first Digital
Twin. It is based on the Lean Startup approach, which emphasizes validated
learning, and a key aspect of that is validating that we are solving the right
problem to deliver the right outcome. We will cover how to validate the problem
statement and expected outcomes next.
Validating the problem statement and
outcomes
Reviewing the problem statement and expected outcomes is part of the initial
phases' validated learning focus, as shown in Figure 4.4. It provides us with the
opportunity to continuously iterate and pivot toward a successful project.
It is essential to validate the problem that you are solving and the expectation of
the business outcomes during each phase of the development life cycle (Figure
4.1) of your Digital Twin prototype. The easiest way to do this is to use the lean
Digital Twin canvas in a formal review workshop at the end of each phase. You
can use this as a checkpoint to ensure that all the stakeholders are still aligned
with both the problem and the expected business outcomes.
You should update the canvas with a new version for each phase to provide
valuable insights at the end of the project. By doing this, you will be able to
evaluate the evolution of the problem statement on the expected business
outcomes over the development life cycle. It is a handy tool to present to
executives to show the Digital Twin's evolution and development – not just of the
technical aspects, but also the business considerations.
Exploring the business process for Digital
Twin development
It is essential to define the changes to the business processes and ensure that end
users are trained to maximize these new insights from the Digital Twins. We
mentioned this in the Phase 2 – the project design and development phase section,
but this impact must be considered early on during the Digital Twin project's life
cycle:
Figure 4.6 – Example of business process changes based on Digital Twin inputs
The preceding diagram describes the business process impact of a Digital Twin in
the predictive maintenance example for slurry pumps in a mine. This example
demonstrates the interaction between the Digital Twin, reliability engineering
teams, maintenance planning teams, and the maintenance crews.
It is the responsibility of the Digital Twin delivery team to ensure that existing
processes are changed to include new ways of working, especially when it
impacts operations and business users that are not working with digital
technology solutions regularly.
This may require a formal business process review for a larger-scale project, but
we suggest that you map out simple process diagrams for your first Digital Twin,
similar to the example shown in the preceding diagram. This will improve the
collaboration and communication between various stakeholders and make the
changes visible to all the process participants. It is important to review these
changes during each phase, but specifically during the validation phase to ensure
that the proposed new process improves the overall experience:
Figure 4.7 – End-to-end business process initiated by a Digital Twin of a pump
The preceding diagram shows a simple, end-to-end business process for a pump
where the Digital Twin is receiving real-time data from the physical pump. When
the Digital Twin predicts a potential failure, it sends a message to a service
technician to initiate the repair. The Digital Twin can create the corresponding
work order in the business system, such as an ERP.
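A minimal sketch of this flow is shown here; the notification and work order functions are hypothetical stand-ins for your messaging service and ERP integration:

# A sketch of the end-to-end process triggered by the Digital Twin.
# notify_technician() and create_work_order() are hypothetical stand-ins
# for a real messaging service and ERP integration.
def notify_technician(asset_id: str, message: str) -> None:
    print(f"[Message to on-call technician] {asset_id}: {message}")

def create_work_order(asset_id: str, description: str) -> str:
    work_order_id = f"WO-{asset_id}-0001"  # the ERP would assign this
    print(f"[ERP] Created {work_order_id}: {description}")
    return work_order_id

def on_failure_prediction(asset_id: str, failure_mode: str,
                          confidence: float) -> None:
    if confidence >= 0.8:  # assumed confidence threshold
        notify_technician(asset_id,
                          f"Predicted {failure_mode} (p={confidence:.2f})")
        create_work_order(asset_id, f"Inspect pump for {failure_mode}")

on_failure_prediction("PUMP-101", "bearing wear", 0.87)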
Changes to business processes are not the only consideration for your first Digital
Twin. Technology decisions that you make at this point may influence the
outcome and scale at which you can deploy future projects. Let's address some of
those technological considerations.
Factoring in technology considerations
Now that we have a clear understanding of the problem that we want to address
with our first Digital Twin, the expected outcomes, the different project phases in
a lean and agile development cycle, and the business processes required to
support it, we need to address the technical considerations for the first Digital
Twin.
To standardize on Digital Twin definitions, interoperability, and how to interact
with these Digital Twins, various organizations are working on technology
standards to address these challenges. Two notable projects in this area are
the Asset Administration Shell (AAS), developed by Plattform Industrie
4.0 (https://bit.ly/idt-aas), and the Digital Twin Definition Language (DTDL),
which is sponsored as an open source initiative by Microsoft (https://bit.ly/idt-
dtdl). In addition to these, there are also standards frameworks for Digital Twins
in manufacturing in current development, such as "The Digital Twin
Manufacturing Framework," which will be published as ISO 23247.
15. Both the AAS and DTDL initiatives focus on technically describing and
instantiating Digital Twins and have a significant technological impact. At the
time of writing, both of these standards are still in development, without sufficient
detail to help us create fully operational, standalone Digital Twins. Deciding on
either of these, or perhaps even your own proprietary approach, is a crucial
architectural decision that's influenced by the technology stack in your
organization and the level of sophistication that you require in the short and
medium term. Creating your first Digital Twin as a minimum viable
product also allows you to test these approaches before deciding on a standard
for your business.
To decide on which standard you should use for your first industrial Digital Twin,
we will look at the two emerging standards we mentioned previously (AAS and
DTDL) at a high level.
Asset Administration Shell
The Asset Administration Shell (AAS) is the implementation of the Digital Twin
for Plattform Industrie 4.0, a network of companies, associations, trade unions,
scientific organizations, and governmental entities in Germany. It is developed by
the working group for "Reference Architectures, Standards, and Norms" (WG1) of
Plattform Industrie 4.0 (https://bit.ly/idt-wgI40):
Figure 4.8 – High-level metamodel of AAS: https://bit.ly/idt-zvei-aas
One of the reasons for considering AAS for your Digital Twin architecture is the
potential library of Digital Twins from product manufacturers that can be used in
assembling a composite Digital Twin.
AAS is predominantly associated with the Industrie 4.0 movement, with most
activity from European manufacturers and their customers. The technology
consideration is primarily around the interoperability with other AAS-based asset
Digital Twins.
It is not in this book's scope to provide a full technical evaluation of AAS, but it is
a critical technical consideration for standardizing your Digital Twin
development in the future. The official technical information is available
at https://bit.ly/idt-zvei-aas, but we will cover some of the essential decisions for
your first Digital Twin here.
One of the technical considerations for Digital Twin development is the format
you use to define, create, store, and operate the Digital Twin information models.
Standardizing on these efforts will help with interoperability among Digital
Twins. It will also reduce the integration effort and make the reuse of models
easier to achieve.
Physical assets are central to AAS. The framework is designed to cater to assets,
components, information, and sub-models that establish a product hierarchy,
somewhat similar to a bill of materials. This is a useful construct for a Digital
Twin information model since it needs to operate in the broader Industrie 4.0
ecosystem of suppliers and consumers. It provides a shared understanding of an
asset in a machine-readable format, and AAS is a metamodel description of assets
and their related data. There are data specification templates for defining concept
descriptions for properties and physical units in the AAS framework.
The following AAS serializations and mappings are currently offered. We have
also specified their typical use cases:
XML and JSON for exchange between partners via the .aasx exchange
format
Resource Description Framework (RDF) for reasoning
AutomationML for the engineering phase
The OPC unified architecture (OPC UA) for the operation phase
Serialization follows a standardized structure that helps improve collaboration
and interoperability. The following diagram shows the metamodel of an asset in
the AAS structured approach:
Figure 4.9 – Metamodel of an asset in the AAS structure
The following snippet of XML code shows the structure of defining the asset and
its component or sub-model hierarchy in a machine-readable format:
. . .
<aas:assetAdministrationShells>
  <aas:assetAdministrationShell>
    <aas:idShort>ExampleMotor</aas:idShort>
    <aas:category>CONSTANT</aas:category>
    <aas:identification idType="URI">
      http://customer.com/aas/9175_7013_7091_9168
    </aas:identification>
    <aas:assetRef>
      <aas:keys>
        <aas:key type="Asset" local="true" idType="URI">
          http://customer.com/assets/KHBVZJSQKIY
        </aas:key>
      </aas:keys>
    </aas:assetRef>
    <aas:submodelRefs>
      <aas:submodelRef>
        <aas:keys>
          <aas:key type="Submodel" local="true" idType="URI">
            http://i40.customer.com/type/1/1/1A7B62B529F19152
          </aas:key>
        </aas:keys>
      </aas:submodelRef>
    </aas:submodelRefs>
    <aas:conceptDictionaries />
  </aas:assetAdministrationShell>
</aas:assetAdministrationShells>
. . .
This serialization can, in turn, be used to interact with the Digital Twin at an
integration level, as well as for the visual representation of an asset to users,
based on the requirements of the use case for the Digital Twin. Open source
developer tools have been made available by the Industrie 4.0 community, and
the AAS Explorer is a current project whose source code is available at
https://bit.ly/idt-aasx.
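As a minimal illustration of consuming this serialization at an integration level, the following Python sketch parses the preceding XML with the standard library. The namespace URI and file name are assumptions; use the values declared in your actual .aasx exchange file:

# A sketch of reading an AAS XML serialization with the standard library.
# The namespace URI and file name below are assumptions for illustration.
import xml.etree.ElementTree as ET

NS = {"aas": "http://www.admin-shell.io/aas/2/0"}  # assumed namespace URI

# Assumes the preceding serialization was saved to this hypothetical file,
# with the aas namespace declared on the root element.
tree = ET.parse("example_motor.aas.xml")
for shell in tree.getroot().iter(f"{{{NS['aas']}}}assetAdministrationShell"):
    id_short = shell.find("aas:idShort", NS)
    print("AAS:", id_short.text if id_short is not None else "<unnamed>")
    for key in shell.iter(f"{{{NS['aas']}}}key"):
        print("  key:", key.get("type"), (key.text or "").strip())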
Some Digital Twin-enabling technology vendors provide out-of-the-box support
for AAS with visualization and data integration capabilities. The following
screenshot shows the implementation of a Digital Twin for a smart factory robot
in AAS in a commercial application:
Figure 4.10 – Example of a robotic arm Digital Twin in the AAS definition for a
smart factory
Other vendor examples can be found at http://www.i40-aas.de.
Another technical consideration is around a standardized data model being
deployed and managed in a single-technology environment. This approach makes
sense when you are standardizing on a technology stack from a cloud solutions
provider. The Microsoft DTDL open source initiative is an approach that supports
this technical consideration.
Digital Twins Definition Language (DTDL)
Through its open source initiative, Microsoft developed the Digital Twins
Definition Language (DTDL) as a language for describing models that include
IoT devices, device Digital Twins, and asset Digital Twins. A device Digital Twin is
the digital representation of a sensor device and includes device information,
such as battery level and connection quality, that is not normally associated
with asset Digital Twins. DTDL uses a variation of JSON, namely JSON-LD, which
is designed to be used as JSON or in Resource Description Framework (RDF)
systems.
DTDL consists of a set of metamodel classes in a similar approach to AAS. Six
metamodel classes are used to define the behavior of DTDL-based Digital Twins:
Interface
Telemetry
Property
Command
Relationship
Component
These metamodel classes can be implemented using a Software Development
Kit (SDK). More technical information on the open source implementation of
DTDL and these six classes is available at https://bit.ly/idt-dtdlv2.
Note that DTDL can only be deployed on the Azure Digital Twins service, which
is available in the Microsoft Azure Cloud at the time of writing this book.
Organizations that standardize their technology stack on Azure and Azure
Services may prefer to use DTDL for their Digital Twin solution deployment.
Visit https://bit.ly/idt-adts for more information on Azure Digital Twins.
DTDL defines semantic relationships between entities to connect Digital Twins to
a knowledge graph that reflects their interactions. It supports model inheritance
to create specialized Digital Twins.
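As an illustration of these two concepts, the following sketch builds a DTDL interface that extends a hypothetical base pump model and declares a relationship to a motor twin. All the dtmi identifiers are made up for this example:

# A sketch of DTDL model inheritance and a relationship, built as a
# Python dict and serialized to JSON. All dtmi identifiers are made up.
import json

slurry_pump_model = {
    "@id": "dtmi:com:example:SlurryPump;1",
    "@type": "Interface",
    "@context": "dtmi:dtdl:context;2",
    "extends": "dtmi:com:example:Pump;1",  # inherits the base pump's contents
    "contents": [
        {
            "@type": "Relationship",
            "name": "drivenBy",
            "target": "dtmi:com:example:Motor;1",
        }
    ],
}

print(json.dumps(slurry_pump_model, indent=2))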
The Digital Twin knowledge graph in Azure Digital Twins can be visualized with
Azure Digital Twins Explorer (https://bit.ly/idt-dtdlx), which shows the
relationship between different Digital Twin models. It is a sample application that
demonstrates how you can do the following:
Upload and explore the models and graphs of DTDL-based Digital Twins.
Visualize the Digital Twin graph with several layouts.
Edit the properties of DTDL Digital Twins and run queries against the
graph.
The following screenshot shows an example of such a knowledge graph based on
the DTDL model of a composite Digital Twin. A description of this example is
available at https://bit.ly/idt-dtdlx:
Figure 4.11 – DTDL-based Digital Twin graph in the Azure Digital Twins service
DTDL is currently less complicated than AAS, but it is also limited in its scope and
capability. It does not store historical data on the Digital Twin as it only records
the current state. If the temperature input from a sensor changes, the current
value is overwritten with the new value.
Users of the Azure Digital Twins service with DTDL typically store temporal data
in a time series database, and then they use the DTDL asset identifiers and
properties to create a historical reference for analysis purposes. This can be done
by a developer in Microsoft Visual Studio, or by subject matter experts, such as
engineers, in a low-code Digital Twin platform with integration connectors,
which provides access to both the Azure Digital Twin and the time series
database. An example of an integration connector can be seen in Figure 4.12.
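A sketch of this pattern is shown below, with an in-memory list standing in for the time series database. The twin state holds only the latest value, while the history is kept separately, keyed by the twin's identifier:

# A sketch of keeping history outside the twin: the twin holds only the
# latest value, while a time series store (here, an in-memory list as a
# stand-in for a real database) keeps the full history keyed by twin id.
import datetime

current_state = {}   # twin id -> latest property values
time_series = []     # stand-in for a time series database

def update_twin(twin_id: str, prop: str, value: float) -> None:
    current_state.setdefault(twin_id, {})[prop] = value  # overwrite state
    time_series.append({
        "timestamp": datetime.datetime.utcnow().isoformat(),
        "twin_id": twin_id,
        "property": prop,
        "value": value,
    })

update_twin("dtmi-pump-101", "Temperature", 71.2)
update_twin("dtmi-pump-101", "Temperature", 73.8)
print(current_state["dtmi-pump-101"])  # only the latest value
print(len(time_series))                # full history retained: 2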
These are important technical considerations when deciding on the Digital Twin-
enabling technologies that will support your first Digital Twin development. Will
the Digital Twin primarily be developed and used by software developers, or is it
aimed at business users to create and maintain Digital Twins in your
organization? Both are technically feasible options but will require different
technological capabilities to support the Digital Twin project's objective.
Here is a DTDL JSON example that describes some of the properties of the
centrifugal slurry pump:
{
  "@id": "dtmi:com:XMPro:PumpAssembly;1",
  "@type": "Interface",
  "@context": "dtmi:dtdl:context;2",
  "displayName": "Pump Assembly",
  "contents": [
    {
      "@type": "Property",
      "name": "Description",
      "schema": "string"
    },
    {
      "@type": "Property",
      "name": "PumpType",
      "schema": "string"
    },
    {
      "@type": "Property",
      "name": "MotorRatedPower",
      "schema": "double"
    }
    ... more types and properties here ...
  ]
}
The following screenshot shows an example of a low-code application
development platform with telemetry data being sent to an Azure Digital Twin
and Azure Time Series Insights:
Figure 4.12 – Telemetry data sent to an Azure Digital Twin and Time Series Insights from a low-code platform
The following screenshot shows how the pump telemetry data from the
preceding screenshot, modeled with DTDL, is presented in an end user interface:
Figure 4.13 – Pump telemetry data, modeled with DTDL, presented in an end user interface
AAS and DTDL highlight some of the technical considerations that need to be
addressed when deciding on the underlying enabling technology you will use to
develop your Digital Twin. There are many other technical aspects such as
security, trustworthiness, remote access, communication requirements, and user
interfaces that we will address at a high level later in this book when we develop
our first Digital Twin. Next, we will look at the digital platforms that support the
Digital Twin development process.
Exploring digital platforms for Digital
Twins
Digital Twins require a digital environment to help build and deploy their
applications. Digital platforms for Digital Twins often consist of several
components that are orchestrated together to provide a Digital Twin-enabling
technology stack.
These digital components include the following:
IoT platforms
Business Process Management platforms
Analytics and data platforms
Application platforms
Let's see how these digital components can be used to create a digital
environment for building and deploying these applications.
IoT platforms
IoT platforms typically consist of several capabilities that connect IoT devices to
analytics and business applications. Traditional Operational Technology (OT)
platforms connect to proprietary devices and control systems. In contrast, IoT
platforms connect to a broad range of IoT devices through open protocols and
make the information available to operational and business applications.
IoT platforms support the development of Digital Twins by doing the following:
Monitoring IoT endpoints and real-time data streams
Supporting both proprietary and open industry protocols for
connectivity and data transfer
Enabling physics- and math-based analytics on the IoT data
Providing edge, distributed, and cloud compute options
Providing integration through APIs for application development
Contextualizing IoT data with information from business and
operational systems
Some of the key capabilities of advanced IoT platforms include the
following:
Device management
Data integration
Data storage and management
Analytics
Application development support
Security and privacy
Combining these capabilities provides the ideal technology foundation for a
Digital Twin development project. The objective of a Digital Twin, however, is not
just around the real-time sensor or IoT information, but rather around the
business outcomes that the Digital Twin drives.
These business outcomes are influenced by the actions that are taken due to the
insights gained from the Digital Twin. Changing or adapting business processes,
as mentioned earlier in this chapter, is a critical success factor for the long-term
value of a Digital Twin.
Some of the representative vendors for IoT platforms include the following:
Alleantia (alleantia.com)
Particle (particle.io)
Microsoft (microsoft.com)
Relayr (relayr.io)
Thingworx (ptc.com)
IMPORTANT NOTE
Please note that this is representative and not an exhaustive list. This has
been provided as a reference.
Information from a Digital Twin can either provide decision support for an
operational user or be used for process automation to remove the human from
the loop. Business process management platforms can address both.
Business Process Management platforms
Business Process Management (BPM) platforms for Digital Twins may
significantly overlap with IoT platform capabilities, but the focus of BPM
platforms is more toward driving the business process or workflow resulting
from IoT data. BPM focuses less on IoT device management, connectivity, and
communication protocols.
Advanced BPM solutions also provide low-code configuration environments
aimed at subject matter experts to help configure workflows, processes, and
business rules to actuate or automate actions. This process not only features data
from IoT sources but also embeds advanced analytics into running processes to
generate insights and situational awareness from the Digital Twin applications.
Some of the representative vendors for BPM platforms include the following:
Avolution (avolutionsoftware.com)
Boxarr (boxarr.com)
iGrafx (igrafx.com)
QPR (qpr.com)
XMPro (xmpro.com)
IMPORTANT NOTE
Please note that this is representative and not an exhaustive list. This has
been provided as a reference.
BPM platforms deliver the actions from Digital Twins, but the solution may
require more of a data and analytics focus to identify the course of action.
Analytics and data platforms
Analytics and data platforms are digital tools that provide advanced
analytics capabilities, including machine learning, artificial
intelligence, and high-fidelity physics models.
These analytics platforms generally rely on IoT platforms for data ingestion from
sensors and control systems sources. Additional data management capabilities
include historian services, data lakes, and prebuilt analytics libraries for specific
equipment types.
Analytics and data platforms are often used in conjunction with BPM or
application platforms to visualize and action the outcomes from the analytical
insights.
Some of the representative vendors for analytics and data platforms include the
following:
ANSYS (ansys.com)
C3.ai (c3.ai)
OSIsoft (osisoft.com)
Sight Machine (sightmachine.com)
Uptake (uptake.com)
IMPORTANT NOTE
Please note that this is representative and not an exhaustive list. This has
been provided as a reference.
Application platforms
Application platforms are the final category of Digital Twin-enabling
technologies that you need to consider when developing and using Digital Twins
in your organization. Application platforms are used to create vertically focused
Digital Twin solutions in support of existing applications such as Asset
Performance Management (APM), Enterprise Asset Management (EAM),
and Operations Performance Management (OPM). Digital Twins bring new
capabilities to these business applications, and the configuration of Digital Twins
is part of the broader application suite of the vendor itself.
Some of the representative vendors for application platforms include the
following:
AVEVA (aveva.com)
Bentley (bentley.com)
GE Digital (ge.com)
IBM (ibm.com)
Oracle (oracle.com)
SAP (sap.com)
Siemens (siemens.com)
IMPORTANT NOTE
Please note that this is representative and not an exhaustive list. This has
been provided as a reference.
The application and use case of your Digital Twin will determine the digital
platform capabilities that you require. Validating the business problem and the
required outcomes, as discussed earlier in this chapter, is crucial to choosing
the right digital technology and infrastructure.
This will become more obvious in the next chapter as we start setting up our first
Digital Twin prototype.
Summary
In this chapter, we considered the planning frameworks we need before starting
our first Digital Twin project. We looked at a project planning framework, which
describes the phases involved, and a solution planning framework, which defines
the problem that we are addressing, the users that it is focused on, and the
expected outcome.
Then, we reviewed how to validate the problem statement and expected
outcomes and how this would influence existing and future business processes.
We also considered the impact of our technology decisions. Finally, we provided
a high-level overview of the enabling technologies and the different types of
digital platforms we can use to set up our first Digital Twin in the next chapter.
Questions
1. Describe the project phases of your Digital Twin prototype.
2. Can you create a Lean Digital Twin canvas to describe your solution?
3. What are the primary technology considerations for your Digital Twin
solution?
4. What do you think the benefits of a Digital Twin in your organization will be?