Hypothesis Based Testing (HBT) Cookbook
© 2011-12, STAG Software Private Limited. All rights reserved.
STEM is the trademark of STAG Software Private Limited.
HBT is the intellectual property of STAG Software Private Limited.

This e-book is presented by STAG Software Private Limited www.stagsoftware.com




Hypothesis Based Testing (HBT) is a scientific, personal test methodology that is unique in its approach to ensuring the cleanliness of software. It is a
goal focused approach, commencing with the setting up of cleanliness criteria, hypothesising the potential defect types that can impede them, and then
performing activities to ensure that testing is purposeful and therefore effective and efficient. The central theme of HBT is constructing a hypothesis
of potential defects that may be probable, and then scientifically proving that they do not in fact exist. The activities of testing, such as test strategy,
test design, tooling & automation, become purposeful because they are focused on uncovering the hypothesised defect types, ensuring that these
activities are done scientifically and in a disciplined manner.

HBT is based on sound engineering principles geared to deliver the promise of guaranteeing cleanliness. Its core value proposition is
hypothesising the potential defects that may be present in the software and then allowing you to engineer a staged detection model to uncover the
defects faster and cheaper than other typical test methodologies.

HBT fits into any development methodology and weaves into your organisational test process. HBT is powered by STEM™ (STAG Test
Engineering Method), a collection of EIGHT disciplines of thinking. STEM provides the foundation for scientific thinking to perform the various
activities. It is a personal scientific inquiry process, assisted by techniques, principles and guidelines, to decompose the problem, identify
cleanliness criteria, hypothesise potential defect types, formulate the test strategy, design test cases, identify metrics and build appropriate automation.




Inspirations from nature
HBT has been inspired by certain ideas, discussed below. The inspirations have come from "Properties of matter", "Fractional
distillation", "Sherlock Holmes" and a "Picture of baby growth".


Properties of matter
The physical & chemical properties of matter allow us to:
... classify
... understand behaviours and interactions (what matter is "affected by")
... enable checking purity

How can we use a similar train of thought to identify
"properties of cleanliness" and then "types of defects"?

[Diagram] End user expectations map to Cleanliness criteria (the "properties of the system");
issues in specifications, structure, environment and behaviour map to Potential Defect Types (PDT).
Inspirations from nature


Fractional distillation
This is a technique to separate mixtures whose components have different
boiling points.

In software systems, a variety of defect types may be present in the
system. How can we apply this thought process to optimally uncover the
defects, by "fractionally distilling" them?

Can we separate these types of defects on the basis of certain properties and
optimally uncover them?

From: http://withfriendship.com
Inspirations from nature


Picture of baby growth
The picture shows the health of the foetus/baby:
its size, shape, parts and the types of issues not present.

Seeking inspiration, can we depict the health of a software system in
a similar manner? Can we measure the 'intrinsic quality' at a
stage?

As we progressively evaluate in a staged manner, certain types of
defects are detected & removed, and therefore quality grows.
Can we chart this as a "cleanliness index"?

Source: http://www.environment.ucla.edu/media/images/Fetal_dev5.jpg
Inspirations from nature

Sherlock Holmes
Sherlock Holmes was a person who applied deductive logic to solve
mysteries.

How can we draw inspiration from Holmes to hypothesise the
types of defects that may be present and prove whether they are indeed present?
HBT - A Personal Scientific Test Methodology

Test methodologies focus on activities that are driven by a process and powered by tools, yet successful outcomes still
depend a lot on experience.

Typically, methodologies are at the organisational level.

HBT, on the other hand, is a personal scientific methodology enabled by STEM™, a defect detection technology, to deliver
"Clean Software".
Scientific approach to detecting defects

Cleanliness criteria     What is the end user expectation of “Good Quality”?


Potential Defect Types   What types of issues can result in poor quality?


Evaluation Stage         When should I uncover them?


Test Types               How do I uncover them?


Test Techniques          What techniques should I use to generate test cases?


Scenarios/Cases          What are the test cases? Are they enough?


Scripts                  How do I execute them?


Metrics & Management     How good is it? How am I doing?



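The eight-question ladder above can be sketched as an ordered checklist. This is a minimal, illustrative Python sketch; the data structure and function names are ours, not part of HBT itself:

```python
# The HBT question ladder as ordered (element, question) pairs (illustrative).
LADDER = [
    ("Cleanliness criteria", "What is the end user expectation of 'Good Quality'?"),
    ("Potential Defect Types", "What types of issues can result in poor quality?"),
    ("Evaluation Stage", "When should I uncover them?"),
    ("Test Types", "How do I uncover them?"),
    ("Test Techniques", "What techniques should I use to generate test cases?"),
    ("Scenarios/Cases", "What are the test cases? Are they enough?"),
    ("Scripts", "How do I execute them?"),
    ("Metrics & Management", "How good is it? How am I doing?"),
]

def next_open_question(answers):
    """Return the first ladder element not yet answered, or None when done."""
    for element, question in LADDER:
        if element not in answers:
            return element, question
    return None

# With the first two elements answered, the next question concerns the stage.
print(next_open_question({"Cleanliness criteria": "...",
                          "Potential Defect Types": "..."}))
```

The point of the sketch is that the ladder is strictly ordered: each question only makes sense once the ones above it have answers.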
How is HBT different from other test methodologies?
The typical test methodologies in vogue have relied on the strength of the process and the capability of the individual to ensure
high quality within the given cost and time constraints. They lack the scientific rigour to enable full cost optimisation and more
often rely on automation as the means of driving down cost and cycle time. For example, they do not provide a strong basis
for assessing the quality of test cases in terms of their defect finding potential, and therefore for improving effectiveness and
efficiency.

HBT, on the other hand, enables you to set a clear goal for cleanliness, derive the potential types of defects and then devise a
"good net" to ensure that these are caught as soon as they are injected. It is intensely goal-oriented and provides you with a
clear set of milestones, allowing you to manage the process quickly and effectively.


[Diagram] In a typical methodology, activities powered by experience hopefully result in the goal.
In HBT, the goal drives the activities, and defect detection is powered by technology (STEM).
Hypothesis Based Testing - HBT 2.0
A Quick Introduction
Personal, scientific test methodology: a SIX stage methodology powered by
EIGHT disciplines of thinking (STEM™).

[Diagram] The system under test (SUT) is taken through: Setup Cleanliness Criteria,
Hypothesise Potential Defect Types, a Nine Stage Defect Removal Filter, and Cleanliness Assessment.
A quick introduction to HBT

SIX stages of DOING, powered by EIGHT disciplines of THINKING.

Stages:
S1 Understand EXPECTATIONS
S2 Understand CONTEXT
S3 Formulate HYPOTHESIS
S4 Devise PROOF
S5 Tooling SUPPORT
S6 Assess & ANALYSE

Disciplines (arranged around the STEM core of 32 core concepts):
D1 Business value understanding
D2 Defect hypothesis
D3 Strategy & planning
D4 Test design
D5 Tooling
D6 Visibility
D7 Execution & reporting
D8 Analysis & management

HBT, the personal test methodology, is powered by STEM, the defect detection technology.
D1 Business value understanding: Landscaping; Viewpoints; Reductionist principle; Interaction matrix; Operational profiling; Attribute analysis; GQM

D2 Defect hypothesis: EFF model; Defect centricity principle; Negative thinking; Orthogonality principle; Defect typing

D3 Test strategy & planning: Orthogonality principle; Tooling needs assessment; Defect centred AB; Quality growth principle; Techniques landscape; Process landscape

D4 Test design: Reductionist principle; Input granularity principle; Box model; Behaviour-Stimuli approach; Techniques landscape; Complexity assessment; Operational profiling; Test coverage evaluation

D5 Tooling: Automation complexity assessment; Minimal babysitting principle; Separation of concerns; Tooling needs analysis

D6 Visibility: GQM; Quality quantification model

D7 Execution & Reporting: Contextual awareness; Defect rating principle

D8 Analysis & Management: Gating principle; Cycle scoping

(Together these make up the 32 core concepts of STEM.)
Connecting HBT Stages to the
Scientific approach to detecting defects

[Diagram] S1/S2: Expectations lead to Cleanliness criteria.
S3: Potential defect types lead to staged & purposeful detection.
S4: Complete test cases.
S5: Sensible automation.
S6: Goal directed measures.
Clear baseline (S1, S2)
Set a clear goal for quality.

Example: Clean water implies
1. Colourless
2. No suspended particles
3. No bacteria
4. Odourless

What information (properties) can be used to identify this?
... Marketplace, customers, end users
... Requirements (flows), usage, deployment
... Features, attributes
... Stage of development, interactions
... Environment, architecture
... Behaviour, structure
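The clean-water example can be made concrete by writing each cleanliness criterion as an explicit check. This is a hypothetical sketch; the sample fields (colour, particles_ppm, bacteria_cfu, odour) and thresholds are invented for illustration:

```python
# "Clean water" cleanliness criteria as boolean checks (illustrative).
# The sample's field names are invented; real criteria would be measurable.
CRITERIA = {
    "colourless": lambda s: s["colour"] == "none",
    "no suspended particles": lambda s: s["particles_ppm"] == 0,
    "no bacteria": lambda s: s["bacteria_cfu"] == 0,
    "odourless": lambda s: s["odour"] == "none",
}

def unmet_criteria(sample):
    """Return the cleanliness criteria this sample fails to meet."""
    return [name for name, check in CRITERIA.items() if not check(sample)]

sample = {"colour": "none", "particles_ppm": 3, "bacteria_cfu": 0, "odour": "none"}
print(unmet_criteria(sample))  # the suspended-particles criterion fails
```

The same shape applies to software: each cleanliness criterion becomes something the test effort can explicitly demonstrate as met or not met.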
A goal focused approach to cleanliness (S3)
Identify the potential defect types that can impede cleanliness.

Example:
Data validation
Timeouts
Resource leakage
Calculation
Storage
Presentation
Transactional ...

A scientific approach to hypothesising defects is about looking at
FIVE Aspects - Data, Logic, Structure, Environment & Usage
from
THREE Views - Error injection, Fault proneness & Failure.

Use the STEM core concepts:
> Negative thinking (Aspect)
> EFF Model (View)

"A Holmes-ian way of looking at the properties of elements"
Levels, Types & Techniques - STRATEGY (S4)
NINE levels to Cleanliness:

L9 End user value
L8 Clean Deployment
L7 Attributes met
L6 Environment cleanliness
L5 Flow correctness
L4 Behaviour correctness
L3 Structural integrity
L2 Input interface cleanliness
L1 Input cleanliness

[Diagram] At each quality level (L1, L2, L3, ...) the potential defect types (PDT1-PDT7)
are matched with test techniques (TT1-TT5) and test types (T1-T4).

PDT: Potential Defect Types
Countable test cases & Fault coverage (S4)
Use the STEM core concepts
> Box model
> Behaviour-Stimuli approach
> Techniques landscape
> Coverage evaluation

to
- Model behaviour
- Create behaviour scenarios
- Create stimuli (test cases)

Irrespective of who designs, the number of scenarios/cases shall be the same - COUNTABLE.

[Diagram] Requirements & fault traceability: each requirement (R1, R2, R3) maps, via a test
technique (TT), to test scenarios (TS) and test cases (TC1,2,3; TC4,5,6,7), which in turn
trace to potential defect types (PDT1, PDT2, PDT3).

The test cases for a given requirement shall have the ability to detect specific types of
defects - FAULT COVERAGE.
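The fault-traceability idea can be sketched as two mappings: which PDTs each requirement is exposed to, and which PDTs its designed test cases can actually detect. Fault coverage for a requirement is then the fraction of its hypothesised PDTs that some test case targets. A hypothetical sketch reusing the slide's R/PDT labels (the numbers are invented):

```python
# PDTs each requirement is hypothesised to be exposed to (labels from the slide).
REQ_PDTS = {"R1": {"PDT1", "PDT2"}, "R2": {"PDT2"}, "R3": {"PDT3"}}
# PDTs that each requirement's designed test cases can actually detect.
CASE_PDTS = {"R1": {"PDT1"}, "R2": {"PDT2"}, "R3": {"PDT3"}}

def fault_coverage(req):
    """Fraction of the requirement's hypothesised defect types its test cases can detect."""
    hypothesised = REQ_PDTS[req]
    detectable = CASE_PDTS.get(req, set())
    return len(hypothesised & detectable) / len(hypothesised) if hypothesised else 1.0

print(fault_coverage("R1"))  # 0.5 - PDT2 is not yet covered by R1's test cases
```

A coverage below 1.0 is a concrete, countable signal that the test set for that requirement is not yet "enough".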
Focused scenarios + Good Automation Architecture
Level based test scenarios (S5) yield shorter scripts that are more flexible to change and easier to maintain.

[Diagram] The scenarios are organised by the nine cleanliness levels, from L1 Input cleanliness,
L2 Input interface cleanliness, L3 Structural integrity, L4 Behaviour correctness, L5 Flow correctness,
L6 Environment cleanliness and L7 Attributes met, up to L8 Clean Deployment and L9 End user value.
“Cleanliness Index” - Improved visibility (S6)

[Chart] As evaluation proceeds stage by stage, the potential defect types (PDT1-PDT10) are
uncovered by test techniques (TT1-TT8) across the levels (L1-L4), and cleanliness grows -
the "growth of a baby".

Quality report: each requirement (R1-R5) is assessed against the cleanliness criteria
(CC1-CC4) and marked as Met, Not met, or Partially met.
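One simple way to chart a "cleanliness index" is as the cumulative fraction of hypothesised defect types cleared as the stages progress. This is an illustrative formula of our own, not STEM's official definition, and the per-level counts are invented:

```python
# PDTs hypothesised per level and PDTs cleared so far (illustrative numbers
# standing in for PDT1-PDT10 across levels L1-L4).
HYPOTHESISED = {"L1": 4, "L2": 3, "L3": 2, "L4": 1}
CLEARED = {"L1": 4, "L2": 3, "L3": 1, "L4": 0}

def cleanliness_index(cleared, hypothesised):
    """Fraction of all hypothesised defect types cleared across the levels."""
    total = sum(hypothesised.values())
    done = sum(min(cleared.get(level, 0), n) for level, n in hypothesised.items())
    return done / total if total else 1.0

print(cleanliness_index(CLEARED, HYPOTHESISED))  # 0.8 after L1, L2 and part of L3
```

Charting this value after each stage gives the monotonically growing picture of health that the baby-growth analogy suggests.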
HBT Stages
Six stages to produce clean software




Six staged methodology to produce clean software
The act of validation in HBT consists of "SIX Stages of DOING". It commences with the first two stages, focused on a scientific approach to
understanding the customer expectations and the context of the software. One of the key outcomes of the first two stages is the
"Cleanliness Criteria", which gives a clear understanding of the expectation of quality. In the third stage, the Cleanliness Criteria and the
information acquired in the first two stages are used to hypothesise the potential types of defects that are probable in the software. The fourth
stage consists of devising a proof to scientifically ensure that the hypothesised defects can indeed be detected cost-efficiently. The fifth
stage focuses on building the tooling support needed to execute the proof. The last stage is about executing the proof and assessing whether the
software does indeed meet the Cleanliness Criteria.

S1: Who are the customers and end users, what do they need, and what do they expect?

S2: What are the features of the system, what technologies are used, what is the architecture?

S3: What types of defects may be present? (In the fishing analogy: what types of fish to catch.)

S4: What is the strategy, the plan, the test scenarios/cases? (Sherlock Holmes)

S5: What tools do I need to detect the defects? (The boat in the fishing analogy.)

S6: How am I doing? How is quality? (The fisherman.)
Stage #1 : Understand EXPECTATIONS

Steps: Understand the marketplace for the system; understand the technology(ies) used; understand the deployment environment;
identify the end user types & the number of users for each type; identify the business requirements for each user type.

The perception that end-users have of how well the product delivers the needs denotes the quality of the
software/system. "Needs" represent the various features that the software/system needs to have to allow the
end-user to fulfil their tasks effectively and efficiently. "Expectations", on the other hand, represent how well the
needs are fulfilled.

The final software/system may be deployed in different marketplaces, addressing the needs of various types of
customers. Hence it is imperative that we understand the various target markets (i.e. marketplaces) where the
software or system will be deployed. There could be different types of customers in a marketplace; hence it
is necessary to identify the various types of customers and then the various types of end-users
present at each customer. What we have done is start from the outside, i.e. the marketplace, and
adopt a customer/end-user centric view to understand the needs and expectations.

Once we have identified the various types of customers and the corresponding end-users, we can move on to
understand the various technologies that make up the software or the system, and also the deployment
environment. The intent is to get a good appreciation of the "construction components" and the target
environment of deployment. It is imperative that we have a good understanding of the internal aspects,
and not merely the external aspects, of the system.

Now we're ready to go into a detailed analysis of the various types of end-users and the typical number of
users of each type. Subsequent to this, we need to identify the various business requirements,
i.e. the "needs", for each end-user.

At the end of the stage, the objective is to have a good understanding of the various end-users and their
needs, paving the way to understanding expectations clearly.
Needs & Expectations

Product (e.g. a Pencil), with customers and end users by marketplace:
‣ Education: Kids, Seniors
‣ Drawing: Artists, Draftsmen
‣ Corporate: Management, Engineering, Admin

For Kids (Education):
NEEDS: Should write. Should have an eraser.
EXPECTATIONS: Should be attractive. Should be non-toxic. Lead should not break easily.

For Draftsmen (Drawing):
NEEDS: Should write. Should not need sharpening.
EXPECTATIONS: Thickness should be consistent. Variety of thickness should be available. Variety of hardness should be available.

Needs are typically features that allow the user to get the job done.
Expectations are how well the needs are satisfied.

Remember Functional & Non-functional requirements ?

                                                                                                          24
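The marketplace, customer type, end-user, needs and expectations breakdown above can be sketched as a simple data structure. This is purely illustrative; HBT prescribes no particular format, and the `needs_for` helper is invented for this sketch:

```python
# Illustrative model of the pencil example: market segments map to
# end-user types and their stated needs/expectations.
pencil = {
    "Education": {
        "end_users": ["Kids", "Seniors"],
        "needs": ["Should write", "Should have an eraser"],
        "expectations": ["Should be attractive", "Should be non-toxic",
                         "Lead should not break easily"],
    },
    "Drawing": {
        "end_users": ["Artists", "Draftsmen"],
        "needs": ["Should write", "Should not need sharpening"],
        "expectations": ["Thickness should be consistent",
                         "Variety of thickness should be available",
                         "Variety of hardness should be available"],
    },
}

def needs_for(product, end_user):
    """Return the needs stated for the segment an end-user belongs to."""
    for segment in product.values():
        if end_user in segment["end_users"]:
            return segment["needs"]
    return []
```

Walking such a structure per end-user type is one way to keep the needs (functional) and expectations (non-functional) visibly separated, as the slide above recommends.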
What does “understanding” involve?
A good understanding of what is expected is key to effective testing. To accomplish this, it is imperative that we commence by understanding who the various types of end users are, their requirements, and subsequently the expectations that they have of these. Having deep domain knowledge helps immensely. But what if this is a domain that I am not very conversant with? Is there a scientific way to understand?

Understanding is a non-linear activity; it is about identifying the various elements and establishing connections between them. In the process of connecting the dots, missing information is identified, leading to intelligent questions. Seeking answers to these questions aids in deepening the understanding.

These are some of the elements that need to be understood. Some of the information elements are “external to the system”, i.e. marketplace, customer types, end users, business requirements, while some are “internal to the system”, i.e. features, architecture, technology etc.

Stage #1 (Understand EXPECTATIONS) focuses on “external information” while Stage #2 (Understand CONTEXT) focuses on “internal information”.




                                                            “Good testing is about asking intelligent questions leading to deeper understanding.”
                                                                                                                                                     25
Information extracted & artefacts generated

At each stage, certain information is extracted, understood and transformed into artefacts useful to perform effective & efficient testing.

Information (inputs to HBT Stage #1):
‣ Marketplace
‣ Customers
‣ User types
‣ Requirements
‣ Deployment environment
‣ Technology
‣ Lifecycle stage
‣ #Users/type

Artefacts:
‣ System overview
‣ User type list
‣ Requirement map

The key outcomes as demonstrated by the artefacts are:
‣ The big picture of the system
‣ The various end users ascertained for different classes of customers in different marketplaces
‣ A list of business requirements for each type of end user

In Stage #1, the focus is on external information that relates to marketplace, customers, end users and business requirements. This stage is useful to get the bigger picture of the system, its potential usage and how it is deployed.

“Good understanding is key to effective testing. Identifying who will use what is the beginning of becoming customer-focused.”
Deliverables from Stage #1

System overview: Should contain a good overview of the marketplace, the various types of customers, end-user types, deployment environment and technologies that will be used to build the system.

User type list: Should contain a list of the various types of users for different types of customers in various market segments.

Requirement map: Should contain a list of the business requirements and high-level technical features mapped to the various individual end-user types.




STEM Discipline applied in Stage #1
 The STEM discipline “Business value understanding” is applied in this stage of HBT.
 The two STEM core concepts of “Landscaping” and “Viewpoints” are useful in this stage to scientifically understand the expectations.




                                                                                                                                             27
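The "Requirement map" deliverable described above can be sketched as a mapping from end-user types to their business requirements and high-level features. All identifiers below are hypothetical examples, and the completeness check is an invented helper, not part of HBT itself:

```python
# A minimal sketch of a Stage #1 requirement map: business requirements
# and high-level technical features mapped to each end-user type.
requirement_map = {
    "Kids": {
        "requirements": ["Write on paper", "Erase mistakes"],
        "features": ["Graphite core", "Attached eraser"],
    },
    "Draftsmen": {
        "requirements": ["Draw precise lines"],
        "features": ["Hard graphite grades"],
    },
}

def unmapped_user_types(user_types, req_map):
    """Flag end-user types with no requirements mapped yet: a quick
    completeness check on the Stage #1 deliverable."""
    return [u for u in user_types
            if not req_map.get(u, {}).get("requirements")]
```

A check like this makes gaps visible early, in the spirit of "connecting the dots" to surface intelligent questions.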
Stage #2 : Understand CONTEXT

Steps:
‣ Identify technical features and baseline them
‣ Understand dependencies
‣ Understand profile of usage
‣ Identify critical success factors
‣ Prioritize value of end user(s) and features
‣ Ensure attributes are testable
‣ Setup cleanliness criteria

In this stage the objective is to understand the technical context in terms of the various features, their relative business value and the profile of usage, and ultimately arrive at the cleanliness criteria. Note that at this stage we are moving inward, to get a better understanding of the technical features of the system.

Having identified the various business requirements mapped to each type of end-user, the next logical step is to drill down to the various technical features for each business requirement. It is important to understand that the various technical features that constitute the entire system do not really work in isolation. Therefore it is necessary to understand the interplay of the features, i.e. the dependencies of a feature on other features. Understanding this dependency is very useful at later stages of the life cycle, particularly to regress optimally.

We now have a list of requirements and the corresponding technical features mapped to each end-user. We are ready to proceed logically to understand the profile of usage of each of the features by the various end-users. To do this it is important to understand the typical and the maximum number of users for each user type, and then the volume of usage by each user for every technical feature. Since we already have a mapping between the end-user type and the technical feature, all we have to do is understand approximately how many times a feature will be used by a typical end-user of that end-user type. The intent is to gain a deeper understanding of the usage profile to enable effective strategy formulation at a later stage of HBT.

It is not sufficient that the features work correctly; it is equally important that the various attributes of the nonfunctional aspects of the various features are indeed met. Typically nonfunctional aspects of the system are identified at the highest system level, and typically turn out to be fuzzy. Good testing demands that each requirement is indeed testable. In HBT, attributes are identified for each key feature and then aggregated to form the complete set of nonfunctional requirements. We do this in two stages: firstly identifying the critical success factors for the technical features and thereby the business requirements, and then detailing the critical success factors to arrive at the nonfunctional requirements or attributes. Hence after figuring out the usage profile, identify the success factors for each business requirement.
                                                                                                                                                  28
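The usage-profile step above (number of users per type multiplied by the volume of usage per user for each feature) can be sketched in a few lines. The figures and names are invented for illustration; HBT does not mandate this representation:

```python
# Sketch of deriving a Stage #2 usage profile: for each technical
# feature, total usage = (number of users of a type) x (approximate
# uses per user of that type).
users_per_type = {"Kids": 1000, "Draftsmen": 50}
uses_per_user = {  # feature -> {user type -> approx. uses per day}
    "Write": {"Kids": 20, "Draftsmen": 200},
    "Erase": {"Kids": 15},
}

def usage_profile(users, uses):
    """Aggregate approximate total uses of each feature across user types."""
    return {feature: sum(users[t] * n for t, n in by_type.items())
            for feature, by_type in uses.items()}
```

Even rough numbers like these make the relative load on each feature explicit, which feeds the strategy formulation later in HBT.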
Stage #2 : Understand CONTEXT (continued)

Good testing is not about testing all features equally; it is about learning to focus more on those requirements/features that affect the customer experience significantly. This does not imply that some requirements/features are less important than others, it simply means that some requirements/features are more important. Before we start detailing the various attributes, it is worthwhile to prioritize the various requirements/features and also the various end-user types. To prioritize, start by ranking the various types of end users in terms of their importance to the successful deployment of the final system. Subsequently rank the importance of each requirement/feature for each end-user type. At the end of this exercise, we should have a very clear understanding of the business value of each requirement/feature. Note that the understanding of the usage profile comes in very handy here.

Now we are ready to derive the various attributes from the previously identified success factors and ensure that they are testable. A testable requirement simply means that it is unambiguously possible to state whether it failed or passed after executing it. In the context of attributes, testability implies that each attribute does indeed have a clear measure/metric. Therefore it is necessary to identify the measures and the expected value of the measures for each of the attributes.

Having identified the various technical features and the corresponding attributes, the usage profile and the ranking of the requirements/features, we are now set to identify the various criteria that constitute the cleanliness of the intended software. Cleanliness criteria in HBT represent testable expectations. Cleanliness criteria provide a very strong basis for ensuring goal-focused testing. They allow one to identify potential types of defects and then formulate an effective strategy and a complete set of test cases. It is important that the cleanliness criteria are not vague or fuzzy.




                                                                                                                                                         29
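The prioritization described above (rank end-user types, then rank each feature per user type) can be combined into a single business-value score. The weighting scheme below is one plausible choice, not something HBT prescribes, and all names and weights are invented:

```python
# Sketch of value prioritization: combine the importance of each
# end-user type with the importance of a feature to that user type.
user_priority = {"Draftsmen": 3, "Kids": 2}   # higher = more important
feature_rank = {                              # per user type
    "Draftsmen": {"Write": 3, "Erase": 1},
    "Kids": {"Write": 2, "Erase": 3},
}

def business_value(feature):
    """Weighted sum of per-user-type feature ranks."""
    return sum(user_priority[u] * ranks.get(feature, 0)
               for u, ranks in feature_rank.items())

# Features ordered by business value: the focus areas for testing.
prioritized = sorted({"Write", "Erase"}, key=business_value, reverse=True)
```

The resulting order is one input to the "Value prioritization matrix" deliverable; the usage profile can be folded in as a further weight if desired.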
Information extracted & artefacts generated

At each stage, certain information is extracted, understood and transformed into artefacts useful to perform effective & efficient testing.

Information (inputs to HBT Stage #2):
‣ Features
‣ Usage
‣ Focus areas
‣ Attributes
‣ Interactions

Artefacts:
‣ Feature list
‣ Value prioritization matrix
‣ Usage profile
‣ Attributes list
‣ Interaction matrix
‣ Cleanliness criteria

The key outcomes as demonstrated by the artefacts are:
‣ A clear list of technical features
‣ Ranking of features to focus on high risk areas
‣ Profile of usage
‣ List of attributes
‣ Feature interactions
‣ Clarity of expectations outlined as “Cleanliness criteria”

In Stage #2, the focus is on internal information that relates to technical features, their interactions, focus areas, attributes, architecture and technology.


                                                                                                                            30
Deliverables from Stage #2

Feature list: Should contain the list of technical features, which forms the technical features baseline.

Value prioritization matrix: Should contain the set of users, requirements and features ranked by importance.

Usage profile: Should contain the profile of various operations by various end users over time.

Attributes list: Should contain the key attributes stated objectively, i.e. state the expected value for all the measures for each attribute.

Interaction matrix: Should indicate which feature affects what. Note that this should list the interactions and not the details of the interactions. The objective is to get a rapid understanding of the linkages.

Cleanliness criteria: Should contain the criteria that need to be met to ensure that the deployed system is indeed clean.



STEM Discipline applied in Stage #2
 The STEM discipline “Business value understanding” is applied in this stage of HBT.
 The STEM core concepts of “Interaction matrix”, “Operational profiling”, “Attribute analysis” and “GQM” are useful in this stage to
 scientifically understand the context.



                                                                                                                                                 31
Cleanliness criteria
Cleanliness criteria are a mirror of expectations. The intention is to come up with criteria that, if met, will ensure that the system meets the expectations of the various end users. This is not to be confused with “acceptance criteria”, which are typically at a higher level. Acceptance criteria are typically “extrinsic” in nature, i.e. they describe aspects like long-duration running, migration of existing data, clean installation and running in the final deployment environment, and delivering stated performance under real-life load conditions.

Cleanliness criteria represent the “intrinsic quality”, i.e. what properties should the final system have to ensure that it is deemed clean? Use the properties of the FIVE aspects of Data, Business logic, Structure, Environment and Usage as applied to your application to arrive at criteria specific to your application.

Note that the cleanliness criteria should cover both the functional and the non-functional requirements.




                                             The recommended style of writing Cleanliness criteria is:
                                             “That the system shall meet ....”

                                             Examples:
                                             That the system is able to handle large data (need to qualify large)
                                             That the system releases resources after use.
                                             That the system displays meaningful progress for long duration activities.
                                             That the system is able to detect inappropriate environment/configuration.




                                                                                                                                                        32
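One way to keep cleanliness criteria from staying vague is to record each criterion with its aspect and a measure/expected value, in the spirit of the testability requirement from Stage #2. The structure and the `untestable` helper below are illustrative assumptions, not part of HBT; the criteria echo the examples above with invented measures:

```python
# Sketch of recording cleanliness criteria with testability in mind:
# each criterion is tied to one of the FIVE aspects and, where possible,
# to a measurable check.
VALID_ASPECTS = {"Data", "Business logic", "Structure", "Environment", "Usage"}

criteria = [
    {"aspect": "Data",
     "criterion": "That the system is able to handle large data",
     "measure": "max records handled", "expected": ">= 1,000,000"},
    {"aspect": "Environment",
     "criterion": "That the system releases resources after use",
     "measure": "open handles after run", "expected": "0"},
]

def untestable(items):
    """Criteria lacking a measure or expected value are still fuzzy."""
    return [c["criterion"] for c in items
            if not (c.get("measure") and c.get("expected"))]
```

Running such a check over the full criteria list is a quick way to spot entries that need qualification (e.g. "large" data) before they reach the strategy stage.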
Stage #3 : Formulate HYPOTHESIS

 Having understood the expectations and the context resulting in the formulation of cleanliness criteria, we are ready to hypothesize
 the potential defects that could affect the cleanliness criteria. This is one of the important stages of HBT resulting in a clear
 articulation of the various types of defects and forms the basis for the remaining stages of HBT.

 The key idea is to use the external information like the feature’s behaviour, environment, attributes and usage, and internal information like the construction material, i.e. technology and architecture, to hypothesize the potential defects that may be present in the software under construction. Also note that the history of previous versions of the software, or of similar systems, can be used to construct and strengthen the hypothesis. Having hypothesized the potential defects, it is possible to scientifically construct a validation strategy and design adequate test cases, thereby ensuring that the final system to be deployed is indeed clean.

 The FIVE key aspects useful for constructing hypotheses of defects are: data, business logic, structure, environment and usage. This HBT stage allows us to follow a structured & scientific approach to hypothesizing the potential defects, ensuring that we do not miss any.




                                                                                                                                              33
Stage #3 : Formulate HYPOTHESIS (continued)

Steps:
‣ Identify potential faults for the five aspects: Data, Business logic, Structure, Environment, Usage
‣ Identify potential failures of the five aspects
‣ Identify potential errors that could be injected in the five aspects
‣ Identify potential defects (PD), combine PDs, remove duplicate PDs
‣ Group similar PDs to form Potential Defect Types (PDT)
‣ Map PDTs to the elements-under-test, i.e. features/requirements

Firstly use the external information like the data specification and the business logic specification to identify the potential defects. The information related to data that could help is: data type, boundaries, volumes, rate, format and data interrelationships. The intent should be to get into a "negative mentality" and think of what can go wrong with respect to all the information related to the data, and then produce a list of potential defects.

Now use the information related to the business logic to identify the potential defects. Business logic, or the intended behaviour, primarily transforms the various inputs, i.e. input data, to outputs that the user values. The intention is to identify potential transformation losses. The information specific to business logic that is useful for arriving at potential defects is: the various conditions and their linkages, values of conditions, exception handling conditions, access control, and dependencies on other parts of the software. Once again, the intent is to get into a "negative mentality" and identify erroneous business flows of logic.

Up to now the focus has been on using external information like the specification of data and business logic to identify the potential defects. Now focus on the internal information like the structure of the system and the construction materials (i.e. language, technology) used to build the system to hypothesize potential defects. Structure at the highest level represents the deployment architecture, while structure at the lowest level represents the structure of the code. Some of the structural information that could be useful to hypothesize is: flow of control, resource usage, distributed architecture, interfacing techniques, exception handling, timing information, threading and layering. As explained above, continue with a similar train of thought, examining this information with the intent of identifying potential problems in the structure.
                                                                                                                                             34
Stage #3 : Formulate HYPOTHESIS (continued)
Having identified potential defects using the behavioural and structural information, examine the information related to the environment and how it can affect the deployed system. By environment, we mean the associated hardware and software on which the system is deployed and the hardware, software and application resources used by the system. The objective is to examine carefully how these can affect the finally deployed system. Some of the key information related to the environment that could be useful is: hardware/software versions, system access control, application configuration information, speed of hardware (CPU, memory, hard disk, communication links), environment configuration information (e.g. #handles, cache size etc.) and system resources (hardware, OS and other applications).

Up till now we have taken a fault-centric approach of looking for potential faults (aka defects) by examining external or internal information. In addition to a fault-centric approach, we can also view the system from potential failure points and then identify the potential defects. Additionally, it is also possible to examine the system from an error injection point of view, that is, to understand the kinds of potential errors that could be injected into the system to irritate the potential defects, if any. The objective is to ensure that we have examined the system from all three views (error centric, fault centric & failure centric) and thereby ensure that we have not missed any potential defects.

A failure centric approach demands that we wear an end-user hat and identify the potential failures that could cause business loss. The cleanliness criteria formulated earlier come in very handy here, as they force us to think like a customer/end-user. What we are trying to do is to ensure that we have considered all the potential failures and therefore hypothesized the potential defects.

Now move to a user centric view to examine the various ways an end-user could abuse the system, by identifying the various ways errors could be injected into the system. Note that an end user does not always connote a physical person; it could be another system that interacts with the system via some interface. So examine the various points of interaction, look at the possibilities of error injection, and then hypothesize the potential defects that could be irritated by these errors. The kinds of information that could be useful here are: workflows, data access, interesting ways of using the system, accessibility, environmental constraints faced by the physical end-user and potential deviant ways of using the system.

Then consolidate the potential defects and group similar ones into potential defect types (PDT). Finally map the PDTs to the various elements-under-test, i.e. features/requirements. Now we have a clear notion of what types of defects we should look forward to uncovering in what parts of the system.


                                                                                                                                                        35
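The consolidation step described above (group similar PDs into PDTs, then map PDTs to elements-under-test) can be sketched as a small transformation. The defect names, types and features below are invented examples; HBT does not prescribe a data format for the fault traceability matrix:

```python
# Sketch of Stage #3 consolidation: potential defects (PDs) carry a
# potential defect type (PDT) and the feature they may lurk in; the
# fault traceability matrix maps each PDT to its features.
potential_defects = [
    ("Overflow on large input", "Data handling", "Login"),
    ("Truncation of long names", "Data handling", "Profile"),
    ("Deadlock on concurrent save", "Concurrency", "Editor"),
]

def traceability_matrix(pds):
    """PDT -> sorted list of elements-under-test where that defect
    type may be present."""
    matrix = {}
    for _description, pdt, feature in pds:
        matrix.setdefault(pdt, set()).add(feature)
    return {pdt: sorted(features) for pdt, features in matrix.items()}
```

The resulting mapping tells us, per feature, which defect types to look forward to uncovering, which is exactly what the later strategy and design stages consume.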
Information extracted & artefacts generated

At each stage, certain information is extracted, understood and transformed into artefacts useful to perform effective & efficient testing.

Information (inputs to HBT Stage #3):
‣ Data
‣ Structure
‣ Environment
‣ Business logic
‣ Usage
‣ Attributes
‣ Past defects

Artefacts:
‣ PD catalog
‣ Fault traceability matrix

The key outcomes as demonstrated by the artefacts are:
‣ List of potential defect types
‣ Mapping between PDTs & the elements-under-test, i.e. Feature/Requirement

In Stage #3, the focus is on hypothesizing PDTs using the FIVE aspects of Data, Business logic, Structure, Environment & Usage from THREE views: Error-centric, Fault-centric & Failure-centric.
                                                                                                                          36
Deliverables from Stage #3

PD catalog: Should contain the list of potential defects and the potential defect types.

Fault traceability matrix: Should contain the mapping between the potential defect types/potential defects and the features/requirements.




STEM Discipline applied in Stage #3
 The STEM discipline “Defect hypothesis” is applied in this stage of HBT.
 The STEM core concepts of “Negative thinking”, “EFF model”, “Defect centricity principle” and “Orthogonality principle” are useful in this
 stage to scientifically hypothesize defects.




                                                                                                                                              37
Stage #4 : Devise PROOF (Part #1: Test Strategy & Planning)
HBT being a goal focused test methodology, the intent is to figure out an optimal approach to detect the potential defects in the system. Therefore strategy in HBT is about staging the order of defect detection, identifying the tests that are needed to uncover the specific defect types, and finally choosing the test techniques best suited for each type of test.

Typically we have always looked at the levels of testing like unit, integration and system from the aspect of the “size” of the entity-under-test. Unit test is typically understood as being done on the smallest component that can be independently tested. Integration test is typically understood as being done once the various units have been integrated. System test is typically seen as the last stage of validation and is done on the whole system.

What is not necessarily very clear is the specific types of defects that are expected to be uncovered by each of these test levels. In HBT, the focus shifts to the specific types of defects to be detected, and therefore the act of detection is staged to ensure an efficient detection approach.

In HBT, the notion is of quality levels, where each quality level represents a milestone towards meeting the final cleanliness criteria. In other words, each quality level represents a step in the staircase of quality. The notion is to ensure that the defects that can be caught earlier are indeed caught earlier. So the first step in the formulation of strategy is to stage the potential defects and thereby formulate the various quality levels.

In HBT, there are NINE pre-defined quality levels, where the lowest quality level focuses on input correctness, progressively going on to the highest quality level, which validates whether the intended business value is indeed delivered.




                                                                                                                                                      38
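The staging idea above (assign each potential defect type to the earliest quality level at which it can be caught, then detect in level order) can be sketched as follows. HBT defines NINE pre-defined quality levels, but this section only names the two ends (input correctness and business value), so the level names and numbers below are placeholders, not HBT's actual level definitions:

```python
# Sketch of staging detection by quality level: each PDT is assigned
# the earliest level at which it can be caught, and detection proceeds
# from the lowest level upward.
pdt_level = {
    "Input validation defects": 1,   # lowest level: input correctness
    "Business flow defects": 5,      # placeholder intermediate level
    "Business value defects": 9,     # highest level: business value
}

def detection_order(assignments):
    """Order PDTs so that what can be caught earlier is caught earlier."""
    return [pdt for pdt, _level in
            sorted(assignments.items(), key=lambda item: item[1])]
```

Ordering the PDTs this way is what turns the hypothesis from Stage #3 into a staged, goal-focused detection plan.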
Stage #4 : Devise PROOF (Part #1: Test Strategy & Planning)
   Understand scope
                              Having identified the various potential defect types to be detected at the various levels, it is now
                              necessary to understand the specific types of tests needed to uncover these potential defects. In HBT, each
                              test is intensely goal focused: a given type of test is meant to uncover only specific types of defects.
   Choose quality levels
                              The act of test type identification results in specific types of tests to be done at each of the quality levels.

                              Now that we know what types of defects need to be detected, when, and by what type of tests, we need
 Identify test types
                              to know how to design adequate yet optimal test cases for each type of test. In HBT, a test technique is one
                              that allows us to design test cases. Based on the types of defects, i.e. the types of tests, we have to identify
                              the test technique(s) best suited for uncovering those types of defects.
 Identify test techniques
                              Now that we have a clearer idea of the various types of defects, the levels of detection, the types of tests and
                              the test techniques, we are ready to identify the optimal detection process best suited for design/execution of
 Identify detection process
                              test cases. The act of identifying the detection process also allows us to understand whether we need
                              technology support to execute the test cases, thereby paving the way for the automation strategy.
 Identify tooling needs
                              At this point in time we have a strategy and are ready to develop the detailed test plan. Some of the key
                              elements of the test plan are the estimation of effort and time and the formulation of the various test cycles.
                              In HBT, cycles are formulated first and then effort and time are estimated.
 Formulate cycles
                              Finally, potential risks that could come in the way of executing the test plan are identified and a risk
                              management plan is put in place.
 Estimate effort
                              In summary, a strategy in HBT is a clear articulation of the quality levels, test types, test techniques and
                              detection process model.
 Identify risks

                                                                                                                                                 39
Information extracted & artefacts generated
                                                                          At each stage, certain information is extracted,
                                                                          understood and transformed into artefacts useful
    Information                                                           to perform effective & efficient testing.

     Cleanliness
       criteria

        PDT

                                                  Artefacts
     Attributes
                                                                                  The key outcomes as demonstrated by the
                       HBT                      Test strategy                     artefacts are:
     Techniques
                     Stage #4                                                      ‣Test strategy
                                                                                   ‣Test plan
   Deployment env.                                Test plan

    Scope of work


     #Scenarios


        Risks
                        In Stage #4 (Part 1), the focus is on identifying
                        the quality levels, test types, test techniques
                        and the detection process.


                                                                                                                             40
Deliverables from Stage #4 (Part #1)

     Test strategy            Should contain the quality levels, test types, test techniques & detection process


       Test plan              Should contain the test effort estimate, cycle details and the potential risk & mitigation plan.




STEM Discipline applied in Stage #4 (Part #1)
 The STEM discipline “Strategy & Planning” is applied in this stage of HBT.
 The STEM core concepts of “Orthogonality principle”, “Quality growth principle”, “Defect centered activity breakdown” and “Cycle scoping” are
 useful in this stage for scientifically developing the strategy & plan.




                                                                                                                                               41
Stage #4 : Devise PROOF (Part #2: Test Design)
 The act of designing test cases is a crucial activity in the test life cycle. Effective testing demands that the test cases possess the power to
 uncover the hypothesized potential defects. It is necessary that the test cases are adequate and also optimal.

 In HBT the design is done level-wise and, within each level, test-type-wise. Based on the level & type, the test entity may differ. The test
 design activity for an entity, for a type of test, at a quality level consists of two major steps: first design test scenarios, then generate
 test cases for each scenario. Test scenarios are designed entity-wise and therefore there is a built-in notion of requirements
 traceability. In addition to requirements traceability, it is expected that the test scenarios and corresponding test cases are traced to the
 potential types of defects that they are expected to uncover. This is termed “Fault traceability”.
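The two traceability notions can be sketched as a small data structure. The scenario, requirement and PDT identifiers below are hypothetical examples, not taken from the cookbook.

```python
# Each scenario records the requirement it exercises (requirements
# traceability) and the PDTs it is expected to uncover (fault traceability).
scenarios = {
    "TS-01": {"requirement": "REQ-01", "pdts": ["invalid input accepted"]},
    "TS-02": {"requirement": "REQ-01",
              "pdts": ["result not persisted", "invalid input accepted"]},
}

def fault_traceability(scenarios):
    """Invert the scenario->PDT mapping into PDT->scenarios; a hypothesised
    PDT that ends up with no entry here is not covered by any scenario."""
    matrix = {}
    for sid, info in scenarios.items():
        for pdt in info["pdts"]:
            matrix.setdefault(pdt, []).append(sid)
    return matrix

matrix = fault_traceability(scenarios)
```

Because the matrix is derived from the scenarios themselves, fault traceability stays consistent with the design as scenarios are added or revised.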




                                                                                                                                                      42
Stage #4 : Devise PROOF (Part #2: Test Design)

  Identify test level to design   The act of test design commences with the identification of the quality level and then the specific type
  consider & identify entities    of test for which the test cases are to be designed. This allows us to identify the various test entities for
                                  which test cases have to be designed.

  Identify conditions & data      Having identified the test entities it is then required to identify the conditions that govern the business
                                  logic and the data elements that drive these conditions. Subsequent to this, build the behavioral model.

                                  Use the behavioral model to generate test scenarios. Then for every scenario, identify the data that
Model the intended behaviour      varies and then generate values for each data element. Finally combine the data values to generate the
semi-formally                     test cases.

                                  Since we have designed scenarios/cases entity-wise, requirements traceability is built-in i.e. the designed
Generate the test scenarios       scenarios/cases automatically trace to the entity (or requirement). Now map the scenarios/cases to the
                                  hypothesized PDTs to build the fault traceability matrix.

For each scenario, generate       Finally assess the test adequacy of the designed scenarios/cases by checking test breadth, depth &
test cases                        porosity.


Trace scenarios to PDT &
entity-under -test


Assess the test adequacy by
fault coverage analysis

                                                                                                                                                  43
Information extracted & artefacts generated
                                                                          At each stage, certain information is extracted,
                                                                          understood and transformed into artefacts useful
    Information                                                           to perform effective & efficient testing.


     Conditions
                                                  Artefacts
       Data
                                             Test scenarios &
       Logic                                       cases                         The key outcomes as demonstrated by the
                      HBT                                                        artefacts are:
                    Stage #4                   Requirements                       ‣Test scenarios & cases
     Structure                                                                    ‣Requirements traceability matrix
                                            traceability matrix
                                                                                  ‣Fault traceability matrix
        PDT                                  Fault traceability
                                                  matrix
   Defect escapes


     Attributes


                        In Stage #4 (Part 2), the focus is on designing
                        test scenarios/cases that can be proved to be
                        adequate and have the power to uncover the
                        hypothesized PDTs.

                                                                                                                             44
Deliverables from Stage #4 (Part #2)

   Test scenarios &
                                Should contain the test scenarios/cases for each entity for all types of tests at various quality levels
         cases

    Requirements
                                Should contain the mapping between the scenarios/cases and the entity-under-test
 traceability matrix


  Fault traceability
                                Should contain the mapping between the scenarios/cases and the PDTs
       matrix




STEM Discipline applied in Stage #4 (Part #2)
 The STEM discipline “Test design” is applied in this stage of HBT.
 The STEM core concepts of Reductionist principle, Input granularity principle, Box model, Behavior-Stimuli approach, Techniques landscape,
 Complexity assessment, Operational profiling and Test coverage evaluation are useful in designing test scenarios/cases scientifically.




                                                                                                                                               45
Stage #4 : Devise PROOF (Part #3: Metrics Design)

                                    In this stage, the objective is to design measurements to manage the process of validation in an
  Identify progress aspects         effective and efficient manner. Since HBT is a goal focused test methodology, it is necessary to devise
                                    measurements that enable us to clearly show the progress towards this goal.

  Identify adequacy(coverage)       The measurements in HBT are categorized into progress related measures, test effectiveness
  aspects                           measures and system risk measures. Therefore it is necessary to identify the various aspects related
                                    to progress, effectiveness and system health.

Identify system risk aspects       Once the aspects are identified, key goals related to these are identified and then the metrics
                                    formulated. Finally it is necessary to understand when to measure and how to measure.


For each of the aspects identify
the intended goal to meet


For each of these goals, identify
questions to ask


To answer these questions,
identify metrics


Identify when you want to
measure and how to measure
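The goal-question-metric chain described above can be sketched as a small tree. The goal, questions and metric names here are hypothetical examples, not HBT prescriptions.

```python
# A minimal Goal-Question-Metric sketch; all names are illustrative.
gqm = {
    "goal": "Stay on course towards the cleanliness criteria",
    "questions": [
        {"q": "Is execution progressing at the planned rate?",
         "metrics": ["planned vs actual cases executed", "cycle burn-down"]},
        {"q": "Are the tests effective at uncovering the PDTs?",
         "metrics": ["defects found per PDT category"]},
    ],
}

def metrics_chart(gqm):
    """Flatten the GQM tree into a chart where every metric carries the
    question and goal it serves, so no metric is collected without purpose."""
    return [{"metric": m, "question": item["q"], "goal": gqm["goal"]}
            for item in gqm["questions"] for m in item["metrics"]]

chart = metrics_chart(gqm)
```

Adding the "when" and "how" of collection as extra fields on each chart row completes the metrics chart deliverable.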

                                                                                                                                             46
Information extracted & artefacts generated
                                                                             At each stage, certain information is extracted,
                                                                             understood and transformed into artefacts useful
                                                                             to perform effective & efficient testing.
    Information


   Quality aspects


  Progress aspects                                    Artefacts

                         HBT                                                         The key outcomes as demonstrated by the
   Process aspects     Stage #4                                                      artefacts are:
                                                   Metrics chart
                                                                                      ‣Chart of metrics that are goal-focused
  Organization goals


   When & how to
     measure




                           In Stage #4 (Part 3), the focus is on designing
                           metrics that will ensure that we stay on
                           course towards the goal.


                                                                                                                                47
Deliverables from Stage #4 (Part #3)

     Metrics chart             Should contain the list of metrics, collection frequency and how this meets the goal.




STEM Discipline applied in Stage #4 (Part #3)
 The STEM discipline “Visibility” is applied in this stage of HBT.
 The STEM core concepts of GQM and the Quality quantification model are useful in designing metrics that are goal-focused.




                                                                                                                         48
Stage #5 : TOOLING support
  Perform tooling benefit         In this stage, the objective is to analyze the support that we need from tooling/technology to
  analysis                        perform the tests. Automation does not always imply scripting, which is typically automating the
                                  designed scenarios. It could also involve development of a test bench or custom tooling to enable the
                                  system to be tested.
  Identify automation scope
                                 This stage of HBT allows you to identify the tooling needs, understand issues/complexity
                                  involved, perform cost-benefit analysis, evaluate existing tools for suitability/fitment and finally
Assess automation complexity      devise a good architecture that provides for flexibility/maintainability before embarking on
                                  automation.
Identify the order in which
scenarios need to be automated


Evaluate tools


Design automation architecture


Develop scripts


Debug and baseline scripts
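The "identify the order in which scenarios need to be automated" step can be sketched as a simple ROI ranking. The scoring formula and the fields below are illustrative assumptions, not an HBT-prescribed model.

```python
# Rank candidate scenarios by expected benefit (manual effort saved per
# cycle) against scripting cost, so high-ROI scenarios are automated first.
def automation_order(candidates):
    def roi(c):
        return (c["runs_per_cycle"] * c["manual_minutes"]) / c["scripting_hours"]
    return sorted(candidates, key=roi, reverse=True)

# Hypothetical candidates with assumed effort figures.
candidates = [
    {"id": "TS-01", "runs_per_cycle": 10, "manual_minutes": 30, "scripting_hours": 2},
    {"id": "TS-02", "runs_per_cycle": 2,  "manual_minutes": 15, "scripting_hours": 4},
]
order = [c["id"] for c in automation_order(candidates)]
```

A real cost-benefit analysis would also weigh scenario stability and maintenance cost, which this sketch omits.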



                                                                                                                                     49
Information extracted & artefacts generated
                                                                           At each stage, certain information is extracted,
   Information                                                             understood and transformed into artefacts useful
                                                     Artifacts
                                                                           to perform effective & efficient testing.
    Automation                                  Needs & benefits
     objectives
                                                   document

       Scope                                      Complexity
                                               assessment report                   The key outcomes as demonstrated by the
    Scenarios to                                                                   artefacts are:
     automate                                Tooling requirements
                                                                                    ‣The reason for tooling & automation
                       HBT                                                          ‣Challenges involved
                     Stage #5                                                       ‣Requirements of tooling
  Scenario fitness
                                                                                    ‣Scope of tooling & automation
                                                Automation scope                    ‣Architecture of automation
 Technologies used                                                                  ‣Automated scripts
                                                   Automation
                                                   architecture
     Tool info.
                                                Tooling & Scripts
  Complexity info.



                         In Stage #5, the focus is on identifying tooling
                         requirements and building automated scripts
                         that deliver value & ROI.
                                                                                                                              50
Deliverables from Stage #5
  Needs & benefits            Should contain the technical & business need for automation.
     document

    Complexity
                             Should contain the technical challenges of automation
 assessment report

Tooling requirements         Should contain the requirements expected out of automation


  Automation scope           Should contain scope of automation

     Automation
                             Should contain the architecture adopted to building tooling/scripts
     architecture

  Tooling & Scripts          The actual tools/scripts for performing automated testing




STEM Discipline applied in Stage #5
 The STEM discipline “Tooling” is applied in this stage of HBT.
 The STEM core concepts of Automation complexity assessment, Minimal babysitting principle, Separation of concerns and Tooling needs analysis
 are useful in adopting a disciplined approach to tooling & automation and delivering the ROI.

                                                                                                                                             51
Stage #6 : Assess & ANALYZE
   Identify test cases/scripts   This stage is where you execute the test cases, record defects, report to the team and take
         to be executed          appropriate action to ensure that the system/application is delivered on time with the requisite
                                 quality.
   Execute test cases, record
          outcomes


       Record defects


 Record learnings from the
  activity and the context


 Record status of execution



 Analyze execution progress


Quantify quality and identify
      risk to delivery

   Update strategy, plan,
  scenarios, cases/scripts

                                                                                                                                    52
Information extracted & artefacts generated
                                                                      At each stage, certain information is extracted,
                                              Artifacts               understood and transformed into artefacts useful
                                                                      to perform effective & efficient testing.
                                        Execution status
                                            report


  Information                              Defect report              The key outcomes as demonstrated by the
                                                                      artefacts are:
   Execution                                                           ‣Report of test execution & progress
                                         Progress report               ‣Defect report
  information
                  HBT                                                  ‣Report on cleanliness aka quality
     Defect     Stage #6                                               ‣Learnings from execution resulting in
  information                          Cleanliness report               improved strategy, scenarios & cases
                                                                       ‣Any other key learnings
    Context                             Updated strategy,
                                        plan, scenarios &
                                              cases


                                          Key learnings



                    In Stage #6, the focus is on ensuring a
                    disciplined execution, intelligent analysis and
                    continuous learning to ensure that the goal is
                    reached.
                                                                                                                         53
Deliverables from Stage #6
   Execution status           Should contain the status of test execution
       report


     Defect report            Should contain defect information


    Progress report           Should contain progress of execution and thereof the cycle


   Cleanliness report          Should contain the cleanliness index and how well the cleanliness criteria have been met

   Updated strategy,
                              Updated strategy, plan, scenarios, cases based on learnings from execution
   plan, scenarios &
         cases


     Key learnings            Key observations/learnings that could be useful in the future



STEM Discipline applied in Stage #6
 The STEM disciplines of “Execution & reporting” and “Analysis and management” are applied in this stage of HBT.
 The STEM core concepts of Contextual awareness, Defect rating principle, Gating principle and Cycle scoping enable disciplined execution,
 foster continual learning and keep the focus on the goal.


                                                                                                                                            54
STEM Disciplines




                   55
Discipline #1 : Business value understanding

How to                                 This discipline enables one to understand the system, create a baseline of features, attributes and
Understand a system                    finally expectations. This discipline consists of SEVEN tools, each of which uses certain STEM core
                                       concepts to ensure these are done in a scientific and disciplined manner.
Landscaping | Viewpoints
                                       Good quality implies meeting expectations. This requires that we understand expectations in
How to
                                        addition to the needs as delivered by the requirements. Understanding the intended business
Create a functional baseline
                                        value to be delivered is key to this.
Viewpoints | Reductionist principle

How to
Create an attribute baseline

Viewpoints | Reductionist principle

How to                                How to
Identify focus areas                  Understand interdependencies

Value prioritisation | Viewpoints     Interaction matrix

How to                                How to
Understand usage                      Baseline expectations

Operational profiling | Viewpoints     Goal-Question-Metric | Viewpoints

                                                                                                                                             56
Baseline provides the basis for future work

What is to be tested needs to be clear.

Remember Functional & Non-functional requirements?




 Functional Baseline                                 Attribute Baseline
 Consists of a list of features to be tested.       The non-functional aspects.
 Essentially an agreed-upon list of features.       Agreed upon attributes & their values.




                                                                                              57
Tools in D1 -Business value understanding
                               STEM Core
Tools                                                                                    Description
                               Concepts
                                                        System is viewed as a collection of information elements that are
How to                         Landscaping
                                                        interconnected. This tool enables you to come up with intelligent questions to
Understand a system            Viewpoints
                                                        understand the various information elements and their interconnections.
                                                        Commencing from an external view of end users, various use cases
How to                         Viewpoints               (requirements) are identified and then technical features that constitute the
Create a functional baseline   Reductionist principle   use cases. This tool enables you to clearly setup a functional baseline that is
                                                        used as a basis for strategy, plan, design, tooling, reporting & management.
                                                        In addition to functional correctness, it is imperative that the attributes are
How to                         Attribute analysis
                                                         met. This tool enables you to identify the attributes and ensure that these are
Create an attribute baseline   Viewpoints
                                                        testable.
                                                         Not all requirements/features are equally valued by the end users. This tool
How to                         Viewpoints
                                                        allows you to rank the end users, requirements, features thereby enabling
Identify focus areas           Value prioritisation
                                                        prioritisation of testing based on the risk and perceived value.
                                                        Understanding the real life usage profile is about knowing what operations,
How to                         Viewpoints               #concurrent operations, rate of arrival are in progress at a point in time. This
Understand usage               Operational profiling    tool allows you to arrive at a close-to-reality potential usage profile of the
                                                        system to ensure effective non-functional tests.
                                                        Understanding how a feature/requirement affects/is-dependent on other
How to
                               Interaction matrix       feature/requirements is useful to understand impact & re-testing effort. This
Understand interdependencies
                                                        tool allows you to rapidly understand the interdependencies.
How to                         Viewpoints
                                                     This tool allows you to derive cleanliness criteria that reflect the expectations.
Baseline expectations          Goal-Question-Metric
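An interaction matrix of the kind described above can be sketched with hypothetical feature names: each feature maps to the features it affects, and a transitive walk yields the set needing re-testing after a change.

```python
# Hypothetical feature interdependencies: feature -> features it affects.
interactions = {
    "login": ["dashboard", "audit"],
    "dashboard": ["report"],
}

def impacted(feature, interactions):
    """Transitive closure of features affected by a change to `feature`,
    i.e. the candidate re-test set for impact analysis."""
    seen, stack = set(), [feature]
    while stack:
        for nxt in interactions.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

retest = impacted("login", interactions)
```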
                                                                                                                                           58
Customers & End Users
                                            Customers in                      End users

                                                                                 Kids
                                            Education

                                                                               Seniors


      Product                                                                  Artists
                                             Drawing
      e.g Pencil
                                                                             Draftsmen



                                                                            Management
                                             Corporate
                                                                            Engineering

                                                                                Admin

A product or an application may be sold in different market places made up of different kinds of customers.
Each class of customer may have different types of end users who use the product.
It is important to understand that each end user may have different needs & expectations.

Testing is about ensuring that the product will indeed satisfy the variety of needs & expectations
                                                                                                              59
Needs & Expectations
                                                                         NEEDS
                                            Customers in    End users    Should write
                                                                          Should have an eraser
                                                               Kids
                                            Education                    EXPECTATIONS
                                                                         Should be attractive
                                                             Seniors
                                                                         Should be non-toxic
                                                                         Lead should not break easily

      Product                                                Artists
                                             Drawing
      e.g Pencil                                                         NEEDS
                                                           Draftsmen     Should write
                                                                         Should not need sharpening


                                                           Management    EXPECTATIONS
                                             Corporate                   Thickness should be consistent
                                                                         Variety of thickness should be
                                                           Engineering    available
                                                                         Variety of hardness should be
                                                             Admin        available


Needs are typically features that allow the user to get the job done.
Expectations are how well the need is satisfied.

Remember Functional & Non-functional requirements ?

                                                                                                          60
Customer Profile
        Customer #1                  Customer #2                  Customer #3                   Customer #4




 Different customers have different types of end users, and a differing number of users for each type of end user.


                                                                                                              61
Customer Profile & Usage
                                    How many                 What does each one use?
    What types
                                    users                    What is order of importance?
    of users
                                                             What is the usage frequency?



    [Diagram: different types of end users mapped to the features F1 to F8 of the System]



 Different end users may use the system differently in terms of what they use, frequency of usage and how they value each feature.

                                                                                                                                          62
Business Value



Ultimately end users need the system to do their job
BETTER, FASTER, CHEAPER and deliver value to their customers.

Understand that it is about the “business value” of the system - how does the system
help my business do BETTER, FASTER, CHEAPER?




                                                                               63
Discipline #2 : Defect hypothesis




                                    64
Discipline #2 : Defect hypothesis

How to
Hypothesise defects
Negative thinking | EFF model | Defect centricity principle

How to
Setup goal-focus
Orthogonality principle

This discipline enables one to hypothesise the potential defect types that may be present in the system under test and to set up a clear, goal-focused approach to detection/prevention. A goal-focused approach implies that we map the hypothesised potential defect types (PDT) to the elements-under-test, i.e. features/requirements.

This discipline consists of TWO tools, each of which uses certain STEM core concepts to ensure these are done in a scientific and disciplined manner.

Hypothesis is done by scientifically examining certain properties of the system and can be complemented by one's experience.




                                                                                                                                        65
Tools in D2 - Defect hypothesis
Tool: How to Hypothesise defects
STEM Core Concepts: Negative thinking | EFF model | Defect centricity principle
Description: Hypothesis is done by examining properties of the system in a scientific manner. Examining the properties of the elements that make up the system from five aspects (data, business logic, structure, environment and usage) and three views (error-injection, fault-proneness and failure) allows you to scientifically come up with potential defects. Subsequently, by grouping similar potential defects, we arrive at potential defect types (PDT).

Tool: How to Setup goal-focus
STEM Core Concepts: Orthogonality principle
Description: Mapping the PDTs to the elements of the system makes you clear as to what type of defect you want to uncover in each element, enabling you to be goal-focused.




                                                                                                                              66
“Properties of the system”
                               End user expectations
   Cleanliness criteria




             “affected by”




                               Issues in specifications,
Potential Defect Types (PDT)   structure, environment
                                    and behaviour




                                                          67
“Properties of the system”
Expectations
                  Cleanliness criteria
  Needs

  Features
                            “impedes”
Environment
 Behavior

 Structure            Potential Defect Types (PDT)
  Material


                    Expectations delivered
                    by Needs (Requirements)
                    via Features
                    that display Behavior
                    constructed from Materials
                    in accordance with a Structure
                    in a given Environment
                                                     68
Setting up a Clear Goal
Before we invest effort in devising a test strategy, plan & test cases, let us be clear about the goal...

What types of defects are we looking for?



The STEM cycle (left): S1 Understand EXPECTATIONS, S2 Understand CONTEXT, S3 Formulate HYPOTHESIS, S4 Devise PROOF, S5 Tooling SUPPORT, S6 Assess & ANALYSE.

The EIGHT disciplines (right), built on 32 core concepts: D1 Business value understanding, D2 Defect hypothesis, D3 Strategy & planning, D4 Test design, D5 Tooling, D6 Visibility, D7 Execution & reporting, D8 Analysis & management.
                                                                        What types of defects may be present?
                     S4                               S3
                                                                        i.e. what types of fishes to catch
                                                                                                                                                   69
Potential Defect Types


                                                   Functional CLEANLINESS

             CLEAN Entity                implies           +

                                                   Attribute CLEANLINESS




  What types of defects will affect my

  1. Functional behavior
  2. Attributes




                                         affects
    Potential Defect Types(PDT)                           Cleanliness criteria




                                                                                 70
Potential Defect (PD) & Potential Defect Type (PDT)

 We may come up with a variety of potential defects for an entity-under-test.
 A set of similar potential defects (PD) may be grouped into a class of defects, i.e. a Potential Defect Type (PDT).
 The intent is to arrive at a smaller set of defect classes to uncover.




                                                                   Example:
                       PDT1
                                                                   PDT1 : User Interface Issues

                                                                     PD1: Spelling mistakes in UI
            PD1                           PD2
                                                                     PD2: UI elements not aligned

                           PD3                                       PD3: UI standards violated


                                                                                                                 71
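The grouping shown in the example above can be sketched in a few lines of Python. This is purely illustrative (not STAG tooling); the defect names and groupings are hypothetical examples.

```python
# Group similar potential defects (PDs) into potential defect types (PDTs).
# The (defect, type) pairs below are hypothetical examples.
from collections import defaultdict

potential_defects = [
    ("Spelling mistakes in UI", "User Interface Issues"),
    ("UI elements not aligned", "User Interface Issues"),
    ("UI standards violated", "User Interface Issues"),
    ("Accepts out-of-bounds data", "Input Validation Issues"),
]

pdts = defaultdict(list)
for pd, pdt in potential_defects:
    pdts[pdt].append(pd)

for pdt, pds in pdts.items():
    print(f"{pdt}: {len(pds)} potential defect(s)")
```

The point of the grouping is the smaller, manageable set of classes: four individual PDs collapse into two PDTs to be targeted.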
Information used for hypothesis
                     used to
                   hypothesise
     Intended
   Functionality




    Attributes
   Expectations
                                   Potential
                                  Defect Types


      Defect
      history




     Personal
    experience




                                                 72
Aspects used to hypothesise

                                                                                               Data
 The two broad areas of validation for any entity-under-test are :
 ‣Functionality
 ‣Attributes
                                                                                                  uses
 Our objective is to ensure that the functional aspects of the system
 are correct and that they meet the expected attributes.
                                                                                 used by
 So, how can we hypothesise potential defect types for a given entity-   Usage             Business Logic
 under-test?

 In this discipline of HBT, we decompose the entity into FIVE                                     built using
 elemental aspects that are:
 ‣Data
 ‣Business logic
 ‣Structure                                                                                  Structure
 ‣Environment
 ‣Usage
                                                                                                  uses, lives in
 i.e. A feature is used by end user(s) and implements the behavior via
 business logic that is built using structural materials that use
 resources from the environment.
                                                                                           Environment




                                                                                                                   73
Views on these Aspects

   Each “Aspect” can be viewed from THREE angles.



         Error injection          What errors can we inject?


                 ERROR
                 irritates
                 FAULT


        Fault proneness           What inherent faults can we “irritate”?



                  FAULT propagates resulting
                  in FAILURE



             Failure
                                  What failures may be caused?




                                                                            74
Aspects & Views Combined

                          Error injection                 Fault proneness                       Failure


                     What kinds of erroneous data    What kind of issues could data   What kinds of bad data can be
        Data
                          may be injected?                      cause?                        generated?


                     What conditions/values can be   How can conditions be messed     What can be incorrect results
    Business Logic
                              missed?                            up?                  when conditions are combined?


                      How can we setup incorrect     How can structure mess up the     What kinds of structure can
      Structure
                            “structure”?                       behavior?                yield incorrect results?


                     What is incorrect environment     How can resources in the         How can environment be
    Environment
                                 setup?              environment cause problems?             messed up?


                      In what ways can we use the    What kinds of usage may be       What can be a poor usage
        Usage
                          entity interestingly?           inherently faulty?                experience?




                                                                                                                      75
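The 5x3 grid above (five aspects examined from three views) can be treated as a mechanical checklist when hypothesising. Driving it from data like this is my illustration, not a prescribed STEM tool.

```python
# Enumerate all 15 aspect/view angles from which an entity-under-test
# is examined when hypothesising potential defects.
from itertools import product

ASPECTS = ["Data", "Business Logic", "Structure", "Environment", "Usage"]
VIEWS = ["Error injection", "Fault proneness", "Failure"]

checklist = [(aspect, view) for aspect, view in product(ASPECTS, VIEWS)]

for aspect, view in checklist:
    print(f"[{aspect} / {view}] -> hypothesise potential defects here")
```

Five aspects times three views gives 15 angles, which is why the hypothesis step is systematic rather than purely experience-driven.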
Generalised PDTs for “Data” Aspect




                                     76
Generalised PDTs for “Business Logic” Aspect




                                               77
Generalised PDTs for “Structure” Aspect




                                          78
Generalised PDTs for “Environment” Aspect




                                            79
Generalised PDTs for “Usage” Aspect




                                      80
TWO Important Core Concepts used in
Defect Hypothesis
          Negative Thinking                    EFF (Error-Fault-Failure) model
          ASPECT oriented approach             View oriented approach.



                    Data

                                                   Error injection
                       uses
          used
           by     Business
  Usage
                   Logic
                                                  Fault proneness
                      built using


                  Structure
                                                        Failure

                       uses, lives in


                 Environment
                                        In real life usage, we combine both of these.



                                                                                        81
How to write PDTs
“Language shapes the way we think.”
Hence it is necessary to have a simple and structured approach to documenting the PDTs identified.


When writing PDTs, commence the sentence with

“That the system/entity may/may-not....”


Write this in defect oriented form.
Write each PDT as a sentence.
Do not be verbose.


e.g.
That the system may accept data out of bounds.
That the system may leak resources.




                                                                                                    82
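The documentation convention above lends itself to a trivial automated check. The checker below is an assumption for illustration, not part of HBT/STEM; the 20-word verbosity limit is likewise my own choice.

```python
# Check that a PDT sentence follows the form "That the system/entity
# may/may-not ..." and is not verbose. The word limit is an assumption.
import re

PDT_PATTERN = re.compile(r"^That the (system|entity) may(-| )(not )?", re.IGNORECASE)

def is_well_formed_pdt(sentence: str, max_words: int = 20) -> bool:
    """True if the sentence follows the PDT form and stays concise."""
    return bool(PDT_PATTERN.match(sentence)) and len(sentence.split()) <= max_words

print(is_well_formed_pdt("That the system may accept data out of bounds."))  # True
print(is_well_formed_pdt("The system leaks resources."))                      # False
```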
Discipline #3 : Strategy & Planning




                                      83
Discipline #3 : Strategy & Planning

How to
Identify scope
Cycle scoping

How to
Formulate strategy
Orthogonality principle | Quality growth principle | Process landscape | Techniques landscape

How to
Formulate cycles
Cycle scoping | Quality growth principle

How to
Estimate effort
Defect centred activity breakdown | Approximation principle

How to
Assess tooling support
Tooling need analysis

How to
Setup criteria
Gating principle

This discipline enables one to adopt a structured and disciplined approach to formulating a goal-focused strategy, estimating effort and then formulating a plan. In HBT, strategy is defined as a clear combination of what to test, when to test, how to design scenarios for test and finally how to test. This means defining the scope of test, the types of tests, the quality levels, the test techniques for design, and the tooling support needed to execute the strategy.

This discipline consists of SIX tools, each of which uses certain STEM core concepts to ensure these are done in a scientific and disciplined manner.


                                                                                                                                   84
Tools in D3 - Strategy & Planning
Tool: How to Identify scope
STEM Core Concepts: Cycle scoping
Description: The focus of this tool is to allow you to clearly identify the scope of testing expected of you, by identifying the types of tests, i.e. the PDTs that you are expected to uncover.

Tool: How to Formulate strategy
STEM Core Concepts: Orthogonality principle | Quality growth principle | Process landscape | Techniques landscape
Description: Strategy is about identifying levels of quality, types of tests, test techniques for ensuring adequacy, and the mode of execution of cases. This tool enables you to take a disciplined approach to developing a goal-focused strategy that will be effective & efficient.

Tool: How to Assess tooling support
STEM Core Concepts: Tooling need analysis
Description: Leveraging technology to develop custom tooling and automate scenarios is key to improving efficiency and effectiveness. This tool enables you to clearly identify the tooling & scripting requirements so that you leverage your investment in tooling & automation.

Tool: How to Estimate effort
STEM Core Concepts: Defect centred activity breakdown | Approximation principle
Description: Using PDTs as the basis, this tool enables a logical way to estimate effort. Having identified PDTs, mapped them to the elements-under-test, identified the types of test that will uncover them, and derived the number of cycles by scoping out cycles, this tool proceeds to estimate the effort for each element-under-test, for each type of test, for every cycle, and sums these to arrive at the potential total effort.

Tool: How to Formulate cycles
STEM Core Concepts: Cycle scoping | Quality growth principle
Description: Formulating cycles requires a clear focus on the scope of every cycle. This tool enables you to be clear as to what PDTs you plan to uncover at different points in time of the development, ensuring that quality growth is in accordance with the quality levels.

Tool: How to Setup criteria
STEM Core Concepts: Gating principle
Description: Effective & efficient testing implies that good defects are indeed found at the right stages of software development. This tool enables setting criteria for each stage of development and release.
                                                                                                                                           85
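The estimation approach described above (effort summed per element-under-test, per test type, per cycle) can be sketched as follows. The elements, test types, hours and cycle counts are hypothetical numbers chosen only to show the summation.

```python
# Defect-centred effort estimation sketch: total effort is the sum over
# (element-under-test, test type) pairs of per-cycle effort times the
# number of cycles scoped for that test type. All figures are invented.
effort_hours = {
    # (element, test_type): estimated hours per cycle
    ("Login screen", "Data validation test"): 4.0,
    ("Login screen", "Functionality test"): 6.0,
    ("Report module", "Functionality test"): 10.0,
}
cycles = {"Data validation test": 1, "Functionality test": 3}

total = sum(hours * cycles[test_type]
            for (element, test_type), hours in effort_hours.items())
print(f"Potential total effort: {total} hours")  # 4*1 + 6*3 + 10*3 = 52.0
```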
Strategy should help in...

     Planning:    Clear plan of action | Estimation, scheduling | Infrastructure work

     Design:      Ensuring high coverage | Test techniques | Automation architecture

     Execution:   Cost-effective execution | Cycle planning | What is manual/automated?

     Assessment:  Metrics - what to track & when | How to interpret | Staying on track

                                                                                      86
Contents of a test strategy

 Features to focus on
 List down major features of the product.
 Rate the importance of each feature (Importance = Usage frequency x Failure criticality).


 Potential issues to uncover
 Identify the PDTs that you look forward to detecting.


 Quality Levels
 Identify the levels of quality that are applicable and map the PDTs to these levels.


 Tests & Techniques
 State the various tests that need to be done to uncover the above PDTs.
 Identify the test techniques that may be used for designing effective test cases.


 Execution approach
 Outline which tests will be done manually and which will be automated.
 Outline tools that may be used for automated testing.


 Test metrics to collect & analyse
 Identify measurements that help you analyse whether the strategy is working effectively.
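The importance rating above (Importance = Usage frequency x Failure criticality) can be sketched in Python. The feature names and the 1-5 rating scales are assumptions for illustration.

```python
# Rank features by Importance = Usage frequency x Failure criticality.
# Feature names and 1-5 scores are hypothetical.
features = {
    "Search":   {"usage_frequency": 5, "failure_criticality": 4},
    "Checkout": {"usage_frequency": 3, "failure_criticality": 5},
    "Help":     {"usage_frequency": 1, "failure_criticality": 2},
}

def importance(scores: dict) -> int:
    return scores["usage_frequency"] * scores["failure_criticality"]

ranked = sorted(features.items(), key=lambda kv: importance(kv[1]), reverse=True)
for name, scores in ranked:
    print(name, importance(scores))
```

The ranking tells you where to focus: a frequently used, failure-critical feature outranks a rarely used, low-criticality one.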
                                                                                        87
Goal-focused strategy
                                              L4     ...        WHAT PDTs to be uncovered
                                                                WHEN (Quality Levels) and
                                             PDT10   TT8        HOW(Test Types)?

                                             PDT9    TT7
                                                                                              Key tests
                                    L3       PDT9    TT6
                                                           L9   End user value
Cleanliness




                                                                                              End to End Flow test
                                   PDT8      TT5

                                   PDT7                    L8   Clean Deployment              SI, Migration, Compatibility
                                             TT4
                          L2       PDT6
                                                           L7   Attributes met                LSPS, Security, Usability,
                         PDT5                                                                 Reliability, Volume
                                   TT3
                L1       PDT4
                                                           L6   Environment cleanliness       “Good citizen” test
               PDT3
                         TT2
               PDT2                                        L5   Flow correctness
                                                                                              Flow correctness test
               PDT1      TT1
                                                           L4   Behaviour correctness         Functionality, Data integrity
                                     Stage
                                                           L3   Structural integrity          Structure test
              In HBT, there exist NINE quality
                                                           L2   Input interface cleanliness   UI test, Usability
              levels, with certain PDTs to be
              uncovered at each level.
                                                           L1   Input cleanliness             Data validation test


                                                                                                                             88
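The goal-focused mapping above assigns each PDT to one of the NINE quality levels, so it is clear WHAT is to be uncovered WHEN. The level names below follow the slide; the PDT-to-level assignments are illustrative assumptions.

```python
# Map PDTs to HBT quality levels. Level names are from the slide;
# the example PDT assignments are hypothetical.
QUALITY_LEVELS = [
    "L1 Input cleanliness",
    "L2 Input interface cleanliness",
    "L3 Structural integrity",
    "L4 Behaviour correctness",
    "L5 Flow correctness",
    "L6 Environment cleanliness",
    "L7 Attributes met",
    "L8 Clean deployment",
    "L9 End user value",
]

pdt_level = {"PDT1": 1, "PDT2": 1, "PDT3": 1, "PDT4": 2, "PDT5": 2}  # example

for pdt, level in sorted(pdt_level.items()):
    print(f"{pdt} -> uncover at {QUALITY_LEVELS[level - 1]}")
```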
Discipline #4 : Test design

How to
Model behaviour
Box model | Techniques landscape | Operational Profiling

How to
Design scenarios & cases
Behaviour-Stimuli approach | Techniques landscape | Input granularity principle

How to
Ensure adequacy
Complexity assessment | Coverage evaluation

This discipline enables one to come up with scenarios/cases that can be proven to be adequate. The design of scenarios/cases uses a model-based approach: you build a behavioural model and subsequently generate test scenarios/cases from it, ensuring these are “countable” (i.e. can be proved to be sufficient) and traced to faults (i.e. have the power to uncover the hypothesised defects).

This discipline consists of THREE tools, each of which uses certain STEM core concepts to ensure these are done in a scientific and disciplined manner.

The tools in this discipline pay a lot of attention to the form & structure of test cases, which conform to the HBT test case architecture. The structure of test cases is seen as crucial to ensuring adequacy and optimality.




                                                                                                                                    89
Tools in D4 - Test design
Tool: How to Model behaviour
STEM Core Concepts: Box model | Techniques landscape | Operational Profiling
Description: This tool enables you to understand the intended behaviour of the element-under-test and create a behaviour model to ensure that the scenarios & cases subsequently designed are indeed complete. This commences by identifying the conditions that govern behaviour and the data elements that drive the conditions.

Tool: How to Design scenarios & cases
STEM Core Concepts: Behaviour-Stimuli approach | Techniques landscape | Input granularity principle
Description: This tool enables you to design scenarios & cases that can be proved to be adequate. A scenario in HBT is a path or flow of a given behaviour, while a test case is a combination of data (stimuli) that makes the system take that path. The focus is on ensuring that the number of scenarios can be proven to be “countable” (i.e. no more and no fewer), and that therefore the test cases too are indeed countable.

Tool: How to Ensure adequacy
STEM Core Concepts: Complexity assessment | Coverage evaluation
Description: This tool enables you to ensure that the designed scenarios/cases are indeed adequate. Tracing scenarios/cases to PDTs enables “fault coverage”, i.e. ensuring the PDTs hypothesised can indeed be covered. In conjunction with “countability”, adequacy can indeed be proved in a logical manner. This tool can also be used to review/assess the completeness/adequacy of existing scenarios/cases.




                                                                                                                                   90
Objective of test design


 Test design is a key activity for effective testing. This activity produces test scenarios/cases.
 The objective is to come up with a complete yet optimal number of scenarios/cases that have the power to uncover good defects.




    Do we have a net that is broad, deep, strong, with
    small enough holes to catch the fishes that matter?




                                                                                                                             91
Effective testing is the outcome of good test cases.
Therefore the design of test cases plays a crucial role in delivering clean software. In the fishing analogy, test cases are
the “net” to catch the “fishes” (defects), and the net needs to be broad, deep and strong, with a fine mesh.

In HBT, the test design activity is done quality-level-wise and, within each level, stage-wise. At each level it is done in two
stages - design test scenarios first and then test cases. Test scenarios are designed entity-wise and therefore there is
a built-in notion of requirements traceability. In addition to requirements traceability, it is expected that the test
scenarios and the corresponding test cases are traced to the potential types of defects that they are expected to
uncover. This is termed fault traceability.

The act of test design commences with the identification of the test level and then the specific type of test for which the
test cases are to be designed. This allows us to identify the various test entities for which test cases have to be designed.

Having identified the test entities it is then required to partition the problem into two parts: firstly to understand the
behaviour (business logic) and then to understand the various data elements for the business logic. This allows us to
identify the various conditions in the business logic and to model the behaviour more formally. The behaviour
model is used to generate test scenarios. Then for every given scenario, we have to understand the data elements that
vary and then come up with the optimal number of values for each data element. The various values of each data element
are then combined to generate the test cases.

Note that only the external specification, and therefore black-box techniques, have been used until now to design the
scenarios and cases. It is equally necessary to use the structural information of the entity under test to refine the
scenarios and test cases.

Finally we have to trace the scenarios and the corresponding test cases to the potential defects that have been
hypothesized for the entity under test for the given test type. This allows us to ensure that the test cases do indeed have
the power to uncover the hypothesized defects and thereby ensure that the test cases are indeed adequate.

The final step involves assessment of the test breadth, depth and porosity, thereby making sure the test cases are indeed
adequate.
                                                                                                                                 92
Approach to test design


   Remember the NINE quality levels..                                            L9   End user value
   The test scenarios/cases are designed level-wise. Note that the entity to
   be tested at each level may be different. For example at the higher levels,   L8   Clean Deployment
   the entities to be tested are requirements/business-flows, whereas at
   lower levels, it may be screens/APIs etc.
                                                                                 L7   Attributes met
   At each level the approach to test design is:
   .. design test scenarios first and then                                        L6   Environment cleanliness
   .. come up with test cases
                                                                                 L5   Flow correctness

                                                                                 L4   Behavior correctness

                                                                                 L3   Structural integrity

                                                                                 L2   Input interface cleanliness

                                                                                 L1   Input cleanliness




                                                                                                                    93
What is a Test Scenario & Test Case?
When we test, our objective is to check that the intended behaviour is what is implemented.

What do we need to do?
For an entity under test, we need to come up with various potential behaviors and check each one of these. That is, we need to come up with a set
of scenarios to evaluate the behaviours.

A Test Scenario reflects a behavior and is the path from beginning to end.




How do we check a behavior?
We do this by stimulating the behavior with a combination of inputs and checking the outputs.

A Test Case is a combination of inputs that stimulates the behavior.


Positive/Negative test scenarios/cases
A positive scenario reflects the expected behavior of the entity under test.
A negative scenario reflects behavior that is not expected of the entity under test.

Test cases that are part of positive scenario are positive test cases.
Test cases that are part of negative scenario are negative test cases.




                                                                                                                                         94
Hierarchical test design

 For each entity under test, generate test scenarios first, and then test cases.
 This is Hierarchical Test Design.




                                                                 Combination of the CONDITIONS
                                                                 result in Test Scenarios




                                                                                       Business Logic
                                                                    Inputs            Is a collection of   Outputs
                                                                                         conditions


                                                 Combination of INPUTS
                                                 result in Test cases




                                                                                                                     95
Information needed
for design                            Key tests/
                                      Information needed

                                      End to End Flow test
   L9   End user value
                                      End user scenarios of usage, End user expectations

   L8   Clean Deployment              SI, Migration, Compatibility
                                      Environment (HW, SW, versions), Data volumes/formats,

                                      LSPS, Security, Usability, Reliability, Volume
   L7   Attributes met                Usage profile, data sizes, access controls, security aspects and
                                      other attribute information as applicable

   L6   Environment cleanliness       “Good citizen” test
                                      Environment dependencies & Resource usage info

                                      Flow correctness test
   L5   Flow correctness
                                      Behavioral (conditions) & Data specification

                                      Functionality, Data integrity
   L4   Behavior correctness
                                      Behavioral (conditions) & Data specification

                                      Structure test
   L3   Structural integrity
                                      Information about architecture & code structure

                                      UI test, Usability
   L2   Input interface cleanliness
                                      Interface information and User information

   L1   Input cleanliness             Data validation test
                                      Data specification info needed
                                                                                                        96
What to do when requisite information is missing/not-available?

When analyzing a specification, look for the conditions that govern the
behavior (business logic) and the data.

It is quite possible that all the conditions may not be clearly listed or the values for the conditions are not clearly stated.

What is to be done in such cases?
It is a cardinal sin to ignore missing conditions!

It is imperative that you identify the list of conditions and values that they take.
In case these are not available, question!

The true value of effective testing lies in uncovering the missing information.
Note that you have in effect uncovered issues in specification, which is great.




                                                                                                                                  97
How do we know that test scenarios/cases are adequate?
1. Test Scenarios/Cases shall be COUNTABLE.
That is, the number of test scenarios/cases designed shall be proven to be no more and no less than necessary.
This can only be done if (a) the behavior is modeled and scenarios are generated from the model, and (b) values for the test inputs are
generated and combined formally.

2. There shall exist scenarios/cases for each requirement/feature
REQUIREMENTS TRACEABILITY.

3. Each type of defect (PDT) hypothesized for every requirement/feature shall be traced to scenarios/cases.
FAULT TRACEABILITY

4. At the lower level, scenarios/cases shall cover all the code (statements or conditions or multiple-conditions or paths)
CODE COVERAGE

                                                                 Countable Scenarios/Cases
                                                                    Feature = Business Logic + Data

                                                                    Business logic is implemented as a set of conditions that have to be met

                                                                    For a given test entity, do we clearly understand ‘all the conditions’ that
                                                                    govern the behavior?
                                                                    Have all ‘effective’ combinations been combined to generate the test scenarios?

                                                                    Do we clearly understand the specification of each test input (data)?
                                                                    Have we generated all the values for each input?
                                                                    Have we combined these values optimally?
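The countability claim can be made concrete. In the sketch below, the behavior is assumed to be governed by three independent boolean conditions (illustrative names, not from any real specification), so the number of scenarios is provable rather than guessed:

```python
from itertools import product

# Illustrative conditions governing a behavior.
conditions = ["valid_user", "account_active", "within_limit"]

# Formally enumerating every combination of condition outcomes makes the
# scenario count provable: exactly 2**n, no more and no less.
scenarios = [dict(zip(conditions, combo))
             for combo in product([True, False], repeat=len(conditions))]
print(len(scenarios))  # 2**3 = 8 countable scenarios
```

In practice the conditions are rarely fully independent, so a decision table or state model would prune infeasible combinations; the point is that the count is derived from a model, not intuition.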
                                                                                                                                                      98
Requirements traceability
                                     Requirement traceability is about ensuring that each requirement does indeed have test case(s). So after
     R1                 TC1          we design test cases, we map test cases to requirements to ensure that all the requirements are indeed
                                     being validated. This is typically used as a measure of test adequacy.
     R2                 TC2
                                     Let us consider a situation wherein there is exactly one test case for each requirement. Now are the
                                     test cases adequate? No! Requirement traceability is a necessary condition for test adequacy but not
     R3                 TC3          sufficient.

     ...                 ...         Also understand that the expectation of a requirement is not merely about functional correctness; it is
                                     also expected that certain attributes, i.e. non-functional aspects, are met. So non-functional
                                     test cases need to be traced too.
     Rm                  TCi




Every test case is mapped to a requirement.
or
Every requirement does indeed have a test case
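A traceability check of this kind is easy to mechanize. In the sketch below, the requirement and test-case identifiers are hypothetical:

```python
# Hypothetical requirement/test-case mapping to illustrate the check.
requirements = {"R1", "R2", "R3"}
test_cases = {
    "TC1": "R1",   # test case -> the requirement it validates
    "TC2": "R2",
    "TC3": "R1",
}

def untested(requirements, test_cases):
    """Requirements with no test case at all -- the traceability gaps."""
    return requirements - set(test_cases.values())

print(untested(requirements, test_cases))  # {'R3'} has no test case
```

Note that this check can only report an empty gap set; as the slide argues, that is necessary but not sufficient evidence of adequacy.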




                                                                                                                                                99
Fault traceability
 PDT1     R1    Having hypothesized the PDTs (Potential Defect Types) in Stage #3, the natural thing to do would be
                to map these to the Requirement (or entity-under-test). This is accomplished as part of Stage #4 to
 PDT2     R2    develop the test strategy.

                Continuing further in Stage #4 the specification of the Requirement is used to design test scenarios
 PDT3     R3    and cases. Note that in this approach, test cases are automatically traced to Requirements.

   ...    ...   Given that the Requirement could have the PDTs that have been mapped earlier, let us map the
                designed test cases to the PDTs. The intent of this is to ensure that the designed test cases do have
                the power to uncover the hypothesized defects.
  PDTi   Rm
                Mapping the PDTs to each Requirement and its associated Test cases is termed Fault Traceability in
                HBT.

  TC1    PDT1   Fault Traceability in conjunction with Requirements Traceability makes the
                condition for test adequacy Necessary and Sufficient
  TC2    PDT2

  TC3    PDT3

   ...    ...

  TCn    PDTi
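The fault-traceability check can be sketched the same way. All PDT, requirement and test-case identifiers below are hypothetical:

```python
# Hypothetical mapping: PDTs hypothesized per requirement, and the PDTs
# each designed test case actually targets.
pdts_by_requirement = {"R1": {"PDT1", "PDT2"}, "R2": {"PDT3"}}
test_cases = {
    "TC1": {"requirement": "R1", "targets": {"PDT1"}},
    "TC2": {"requirement": "R2", "targets": {"PDT3"}},
}

def untargeted_pdts(pdts_by_requirement, test_cases):
    """PDTs with no test case designed to uncover them."""
    gaps = {}
    for req, pdts in pdts_by_requirement.items():
        targeted = set()
        for tc in test_cases.values():
            if tc["requirement"] == req:
                targeted |= tc["targets"]
        if pdts - targeted:
            gaps[req] = pdts - targeted
    return gaps

# R1 satisfies requirements traceability (TC1 exists), yet fault
# traceability exposes a gap: nothing targets PDT2.
print(untargeted_pdts(pdts_by_requirement, test_cases))
```

This is exactly the "necessary and sufficient" argument of the next slide: the requirements check alone would pass, while the fault check reveals the missing test case.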

                                                                                                                        100
Fault traceability + Requirements traceability

       Requirements traceability is
       “Necessary but not sufficient”


            Fault                           Fault
         traceability                    traceability    Assume that each requirement had just one test case. This implies that
                                                         we have satisfied the required traceability objective.

 PD1               R1             TC1              PD1   What we do not know is whether there could be additional test cases for some
                                                        of the requirements.
 PD2               R2             TC2              PD2
                                                         So requirements traceability is a necessary condition, not a sufficient
 PD3               R3             TC3              PD3   condition.
 ...               ...            ...              ...   So, what does it take to be sufficient?
 PDn               Rm             TCi              PDn   If we had a clear notion of types of defects that could affect the
                                                         customer experience and then mapped these to test cases, we have
                         Requirements                    Fault Traceability. This allows us to be sure that our test cases can
                          traceability                   indeed detect those defects that will impact customer experience.




                                                                                                                                  101
Test design documentation


                        Useful to clarify intent/ setup goal

Test objective                                                 Questions:
                        Useful to setup test environment
Prerequisites                                                  What is the value of each of these
                                                               pieces of information?
Test data combination                                          i.e. How useful are they?
Expected results
                                                               What do these various pieces of
Test steps              Useful to detect defects               information help in?



                        Useful in manual execution and to
                        assist in automation scripting




                                                                                                    102
Syntax of test case documentation
Test objective
Describe the test objective in natural language.

Prerequisites
Describe the prerequisites in natural language.

Test scenario description
Write this as a ‘one-sentence beginning with
“Ensure that system does/does-not...”

Test cases
For each scenario, list the test cases as a table, as shown below.




Test steps/procedure
Describe the procedure for execution as a series of steps.
1 ....
2 ....

Note:
Be as terse as possible and yet be clear. The intent should be to think more rather than document more. Terseness also forces clarity to
emerge.

                                                                                                                                        103
HBT Test Case Architecture
               Organized by Quality levels
                sub-ordered by items (features/modules..),
                 segregated by type,
                  ranked by importance/priority,
                    sub-divided into conformance(+) and robustness(-),
                       classified by early (smoke)/late-stage evaluation,
                         tagged by evaluation frequency,
                            linked by optimal execution order,
                                classified by execution mode (manual/automated)




                                A well architected set of test cases is like an effective bait that can ‘attract
                                defects’ in the system. In HBT, we pay attention to the form and structure of
                                the test cases in addition to the content.

                                The form and structure as suggested by the HBT test case architecture also
                                enables existing test cases to be analyzed for effectiveness/adequacy. This can
                                be done by “flowing the existing test cases” into the “mould of HBT test case
                                architecture”.
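One possible way to realize this architecture (illustrative, not a prescribed HBT schema) is to carry the organizing attributes as metadata on each test case, so that ordering and selection become mechanical:

```python
from dataclasses import dataclass

# Sketch: the architecture's organizing attributes as metadata fields.
# Field names and values are illustrative assumptions.
@dataclass
class TestCase:
    quality_level: int      # L1..L9
    item: str               # feature/module
    test_type: str
    priority: int
    polarity: str           # "conformance" (+) or "robustness" (-)
    stage: str              # "smoke" (early) or "late"
    mode: str               # "manual" or "automated"

suite = [
    TestCase(4, "login", "functional", 2, "robustness", "late", "manual"),
    TestCase(4, "login", "functional", 1, "conformance", "smoke", "automated"),
]

# The structure makes ordering mechanical: smoke tests first, then priority.
run_order = sorted(suite, key=lambda tc: (tc.stage != "smoke", tc.priority))
print([tc.polarity for tc in run_order])  # ['conformance', 'robustness']
```

The same metadata also supports the "mould" idea on this slide: existing test cases can be tagged with these fields and any gaps (e.g. no robustness cases at a level) become visible immediately.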




                                                                                                                  104
Discipline #5 : Tooling

How to                           Tooling and automation is not simply developing code; it requires clear analysis and design to
Analyse tooling needs            ensure that the tooling/automation is flexible enough to keep up with changes to the system
                                 and that it delivers value. This discipline enables you to analyse the tooling needs in a rational
Tooling needs analysis           manner, ensuring that the investment in tooling is not wasted and that the resulting scripts do
Automation complexity analysis   improve efficiency and effectiveness.

                                 This discipline consists of TWO tools, each of which uses certain STEM core concepts to ensure
                                 these are done in a scientific and disciplined manner.

How to
Good scripting

Separation of concerns
Minimal babysitting principle




                                                                                                                                       105
Tools in D5 - Tooling
Tools                   STEM Core Concepts                                              Description
How to                  Tooling needs analysis           This tool enables you to understand what parts of testing need the support of
Analyse tooling needs   Automation complexity analysis   technology in terms of tooling/automation.
                                                         A script once developed has to be kept in sync with the application/system and
                                                         hence requires continuous maintenance. Also, a script when run may
                                                         encounter situations that cause it to stop or seek user guidance to
How to                  Separation of concerns
                                                         continue. This tool enables you to develop good scripts by ensuring a clear
Good scripting          Minimal babysitting principle
                                                         separation of data and code, and by designing the “execution run flow” (i.e. what
                                                         script needs to be executed in case this one fails) to ensure that the automated
                                                         run is maximised (i.e. as many of the scripts as possible are indeed run).
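The two core concepts can be sketched together. The sketch below is framework-agnostic and every name in it is illustrative:

```python
# Separation of concerns: the test data lives apart from the driver code,
# so the data can change without touching the script logic.
test_data = [
    {"input": 2, "expected": 4},
    {"input": -3, "expected": 9},
    {"input": "oops", "expected": 0},   # will raise -- must not halt the run
]

def run_all(step, test_data):
    """Minimal babysitting: record each outcome and always continue,
    so one failing case never stops the automated run."""
    results = []
    for case in test_data:
        try:
            outcome = "pass" if step(case["input"]) == case["expected"] else "fail"
        except Exception:
            outcome = "error"           # log it and move to the next case
        results.append(outcome)
    return results

print(run_all(lambda x: x * x, test_data))  # ['pass', 'pass', 'error']
```

A real harness would also log the failure details and decide which dependent script to run next (the “execution run flow”), but the shape of the loop stays the same.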




                                                                                                                                          106
Discipline #6 : Visibility

How to                        This discipline enables one to “quantify quality” to enable a goal-focused approach to management.
Measure quality               The focus of this discipline is to setup a model for measuring quality and also devise measures that
                              are purposeful and goal-focused.

Quality quantification model   This discipline consists of TWO tools, each of which uses certain STEM core concepts to ensure
                              these are done in a scientific and disciplined manner.



How to
Devise measures

Goal-Question-Metric
Metrics landscape




                                                                                                                                     107
Tools in D6 - Visibility
Tools             STEM Core Concepts                                             Description
                                                This tool enables you to set up a model to measure the “intrinsic” quality by
How to                                          using the “cleanliness criteria” to give an objective picture of the
                  Quality quantification model
Measure quality                                 system quality. It also allows you to come up with a “cleanliness index” to
                                                quantify quality.
                                                This technique ensures that you design measures that are goal-focused.
How to            Goal-Question-Metric
                                                Rather than setting up measures and then analyzing them, this tool helps you
Devise measures   Metrics landscape
                                                articulate a goal and then derive appropriate measures.
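A Goal-Question-Metric derivation might be sketched as follows; the goal, questions and metrics here are hypothetical examples, not a prescribed set:

```python
# Hypothetical GQM tree: goal -> questions -> metrics.
gqm = {
    "goal": "Assess whether the release is clean enough to ship",
    "questions": {
        "Are the hypothesized defects being uncovered?":
            ["defects found per PDT", "fault traceability coverage %"],
        "Is quality improving across test cycles?":
            ["defect arrival rate per cycle", "cleanliness index trend"],
    },
}

# Every metric is justified by the question it answers, never collected blindly.
metrics = [m for ms in gqm["questions"].values() for m in ms]
print(len(metrics))  # 4 metrics, each traceable back to the goal
```

The direction of derivation is the whole point: if a metric cannot be attached to a question, and the question to the goal, it is dropped.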




                                                                                                                                108
Discipline #7 : Execution and reporting

How to                    This discipline enables one to ensure that the reporting of information during testing conveys the
Good defect reporting     information that enables purposeful actions to be executed.

Defect rating principle   This discipline consists of TWO tools, each of which uses certain STEM core concepts to ensure
                          these are done in a scientific and disciplined manner.




How to
Learn & Improve

Contextual awareness




                                                                                                                               109
Tools in D7 - Execution and reporting
Tools                   STEM Core Concepts                                         Description
                                                   This tool helps you report the outcomes of testing, i.e. defects, in a clear
How to
                         Defect rating principle   manner to (1) enable a clear understanding of the problem (2) enable clear
Good defect reporting
                                                   resolution (3) provide learning opportunities for improvement.
                                                   A-priori planning/design is useful, but learning from the act of
How to                                             testing by understanding the context is essential to effective
                         Contextual awareness
Learn & Improve                                    testing. This tool is about sensitizing you to this, so that the test artefacts are
                                                   continually enhanced with learnings from testing.




                                                                                                                                    110
Discipline #8 : Management

How to                        This discipline takes an “earned value approach” to management (i.e. goal focused). The focus is on
Goal focused management       using the cleanliness criteria and index as the basis for ascertaining where we are with respect to
                              quality in comparison with where we should have been, and then ascertaining risks related to
Quality quantification model   quality & release to enable rational & clear management.
Gating principle
                              This discipline consists of ONE tool, that uses certain STEM core concepts to ensure these are
                              done in a scientific and disciplined manner.




                                                                                                                                   111
Tools in D8 - Management
Tools          STEM Core Concepts                                           Description
                                              The tool uses the cleanliness criteria and index to understand where you
                                              are with respect to the goal. Remember that we commenced with setting up
How to
                Quality quantification model   cleanliness criteria and quality levels. This tool adopts an “earned value
Goal focused
                Gating principle              approach to quality” by enabling you to assess where you are in the quality
management
                                              levels and compare with where you should be, helping you clearly
                                              understand the gaps and enabling you to manage rationally/objectively.




                                                                                                                           112
STEM Core Concepts



                     113
Techniques Landscape
A guideline that lists the set of test design techniques based on the method of examination, the design stage and the type of defect.




This is a guideline that lists the various test techniques to allow you to choose the appropriate ones.

The techniques are classified in two ways. The first classification is based on the type of information used for
design: external information (black box) versus internal information (white box). The second is based on the
test design outcome: (1) techniques useful for designing test scenarios, (2) techniques useful for creating the
various test data values, and (3) techniques useful for combining the test data optimally yet effectively.

Black Box Techniques
Scenario design: Functional test (Decision table, Flowchart, State machine); NFT/LSPS (Operational profiling)
Data value generation: Boundary value analysis, Equivalence partitioning, Special value, Error based
Test case generation: Exhaustive, Single fault, At least once, Orthogonal array (Pair-wise combination)

White Box Techniques
Control flow based: Cyclomatic complexity, Statement coverage, Decision coverage, Multiple condition coverage, Path coverage
Data flow based: Data flow (def-use)
Resource based: Resource leak
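As an illustration of the pair-wise idea, the greedy sketch below (a simplification, not the orthogonal-array method itself; all parameter names and values are hypothetical) shows that all-pairs coverage needs far fewer cases than exhaustive combination:

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedy all-pairs generation: every value pair of every two
    parameters appears in at least one generated test case."""
    names = list(params)

    def pairs_of(case):
        return {(a, case[a], b, case[b]) for a, b in combinations(names, 2)}

    # every value pair that must appear somewhere in the suite
    uncovered = {p for combo in product(*params.values())
                 for p in pairs_of(dict(zip(names, combo)))}
    suite = []
    while uncovered:
        # pick the candidate case covering the most still-uncovered pairs
        best = max((dict(zip(names, combo)) for combo in product(*params.values())),
                   key=lambda case: len(pairs_of(case) & uncovered))
        suite.append(best)
        uncovered -= pairs_of(best)
    return suite

# Illustrative parameters: exhaustive combination needs 2 * 2 * 2 = 8 cases.
params = {"browser": ["chrome", "firefox"],
          "os": ["win", "mac"],
          "locale": ["en", "fr"]}
suite = pairwise_suite(params)
print(len(suite))  # all pairs covered in fewer cases than exhaustive
```

The saving grows quickly with the number of parameters; production tools use orthogonal arrays or more sophisticated greedy search, but the coverage criterion is the same.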




                                                                                                                                 114
Landscaping
A technique to rapidly understand the system by examining the various elements and the connections between them.

              This technique, inspired by mind-mapping, enables one to ask
              meaningful questions in a systematic manner to understand
              the needs & expectations.

              It is based on the simple principle:
              “Good questions matter more than the answers. Even if
              questions do not yield answers, it is fine, as it is even more
              important to know what you do not know.”

              The premise is that understanding about SIXTEEN key
              information elements and their connections enables one to
              understand the expectations & system. The act of seeking
              information results in questions that aid in understanding.




                                                                              115
Typical questions generated by Landscaping (1/2)
Marketplace        What marketplace is my system addressing?
                   Why am I building this application? What problem is it attempting to solve? What are the success
                   factors?
Customer type      Are there different categories of customers in each marketplace?
                   How do I classify them? How are their needs different/unique?
End user (Actor)   Who are the various types of end users (actors) in each type of customer?
                   What is the typical/max. number of end-users for each type?
                   Note: An end user is not necessarily a physical end user, a better word is ‘actor’
Requirement        What does each end user want? What are the business use cases for each type of end user?
(Use case)         How important is this to an end user - what is the ranking of a requirement/feature?
Attributes         What attributes are key for a feature/requirement to be successful (for an end user of each type of
                   customer)?
                   How can I quantify the attribute i.e. make it testable?
Feature            What are the (technical) features that make up a requirement (use-case)?
                   What is the ranking of these?
                   What attributes are key for a successful feature implementation?
                   How may a feature/requirement affect other feature(s)/requirement(s)?




                                                                                                                         116
Typical questions generated by Landscaping (2/2)
Deployment environment   What does the deployment environment/architecture look like?
                         What are the various HW/SW that make up the environment?
                         Is my application co-located with other applications?
                         What other software does my application connect/inter-operate with?
                         What information do I have to migrate from existing system(s)? Volume, Types etc.
Technology               What technologies may/are used in my application?
                         Languages, components, services...
Architecture             What does the application structure look like?
                         What is the application architecture?
Usage profile             Who uses what?
                         How many times does an end user use it per unit time? i.e. #/time
                         At what rate do they use a feature/requirement?
                         Are there different modes of usage (end of day, end of month) and what is the profile of usage in
                         each of these modes?
                         What is the volume of data that the application should support?
Behavior conditions      What are the conditions that govern the behavior of each requirement/feature?
                         How is each condition met - what data (& value) drives each condition?




                                                                                                                            117
Viewpoints
See the system from various end users’ points of view to identify the needs & expectations and set a clear baseline.

               Good testing requires that the tester evaluate the system from the end user’s angle
               i.e. put oneself in the end-user’s shoes. This is easier said than done.


               Viewpoints is a technique that enables this.

               This states that each type of user:
               1. Has different expectations from the system
               2. Uses different features due to differing needs
               3. Values different attributes
               4. Views the importance of a feature differently
               5. Uses features at a different frequency/rate
               6. Has different expectations on quality




               Once the various types of users are identified, this technique is useful in digging
               deeper to get a clear handle on needs & expectations.




                                                                                                   118
Reductionist principle
Manage complexity by decomposing the information into smaller elements.

Reductionism means reduction, simplification. The objective of this principle is to break down an aspect into
smaller parts until it is understood clearly. The intent is to gain crystal-clear clarity to enable a job to be
performed well.

This principle can be applied at various phases of evaluation to understand various aspects:

Phase                    Application of this principle
Product understanding    Break down the system needs into use cases, then features.
                         Break down the requirements into functional and non-functional aspects.
Test design              Break down the entity under test into business logic and data components
                         to design functional test scenarios/cases.
Complexity assessment    Break down complexity into functional, structural, data and attribute
                         complexity.
Effort estimation        Break down large activities into smaller fine-grained activities so that
                         effort can be estimated precisely.

[Diagram: decomposition examples - Customer into End Users; System into Functionality;
Requirements into Features; a Requirement into Functionality and Attributes; Attributes into
Measures; in engineering terms, a Feature into Business logic and Data]
                  Data


                                                                                                                         119
Reductionist principle (continued)
 The principle is to break down anything into its smallest components. The intention is to gain a better understanding.

 So, if you are trying to understand a product, decompose the product into requirements (aka use cases). Subsequently decompose each use case into its constituent features.

 Decompose the given requirement/feature into functional and non-functional aspects. Decompose the functional aspect of a feature into business logic and data.

 In the case of estimation, decompose the act of validation into large-grained test life-cycle activities and then break each of these into smaller-grained activities.

 To understand the complexity, decompose the complexity into functional behaviour complexity, structural complexity (how complicated are the innards), attribute complexity (what aspects of non-functional behaviour are challenging) and data complexity (size/volume and data inter-relationships).
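The decomposition described above can be sketched as a simple tree. This is an illustrative aid, not part of HBT itself; the `Node` class and the example names are hypothetical.

```python
# Hypothetical sketch: the reductionist decomposition as a tree of
# named nodes, from product down to business logic and data.

class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def leaves(self):
        """Return the finest-grained elements of the decomposition."""
        if not self.children:
            return [self.name]
        out = []
        for child in self.children:
            out.extend(child.leaves())
        return out

# Product -> requirements (use cases) -> features -> business logic / data
product = Node("Product", [
    Node("Requirement #1", [
        Node("Feature #1", [Node("Business logic"), Node("Data")]),
        Node("Feature #2"),
    ]),
    Node("Requirement #2"),
])

print(product.leaves())
```

The leaves are the elements that are small enough to be understood clearly; anything still too coarse gets further children.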




                                                                                                                                                 120
Interaction matrix
Understand the interrelations of the elements: requirements, features.

      F1   F2   F3   F4
 F1        X    X
 F2                  X
 F3                  X
 F4   X

A system is not a mere collection of distinct features; it is the interplay of the various features that produces value. But this also has an important side-effect: the various features may affect each other in a negative fashion. A highly interacting set of features makes the system complex.

This technique allows us to understand the potential interactions among the features/requirements. Modifying a feature may therefore result in an unwarranted side effect. This technique helps us understand the interaction of the various features of the software, hypothesize the potential unwanted side-effects, and therefore formulate an effective strategy of evaluation. It is useful to map the inter-relationships quickly at first, rather than elaborate the semantics of each interaction. The semantics of an interaction may be deferred to the point when a detailed analysis of a change needs to be done.

Understanding the linkages is also useful to appreciate potential side-effects that may affect some of the key attributes. This is useful in understanding the system complexity, enabling effective strategy formulation and, later, optimization of regression tests.
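The matrix lends itself to a small impact-analysis sketch. The features and interactions below mirror the matrix on this slide; the `affected_by` helper is a hypothetical illustration, not an HBT artefact.

```python
# Hypothetical sketch of the interaction matrix above: an 'X' at
# (F1, F2) means F1 interacts with F2. Given a modified feature,
# list the features potentially affected by the change.

# Row feature -> set of features it interacts with (from the matrix)
INTERACTIONS = {
    "F1": {"F2", "F3"},
    "F2": {"F4"},
    "F3": {"F4"},
    "F4": {"F1"},
}

def affected_by(feature):
    """Features whose behaviour may change when `feature` is modified:
    those it interacts with, plus those that interact with it."""
    inbound = {f for f, deps in INTERACTIONS.items() if feature in deps}
    return sorted(inbound | INTERACTIONS.get(feature, set()))

print(affected_by("F4"))  # F4 interacts with F1; F2 and F3 depend on F4
```

A query like this is exactly what supports regression-test optimization: only the tests covering the affected features need re-running.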




                                                                                                                                  121
Attribute analysis
A technique to identify attributes expected of the system and ensure that they are testable.

It is not sufficient that each feature is functionally clean; it is equally important that the associated attributes also be met. The challenging aspect of attributes is that they are typically fuzzy. Good testing implies that attributes be testable, which implies that each attribute have a clear measure or metric. For example, if performance is one such attribute, it is necessary to understand the performance expectation for a feature: in the worst case t <= T, where T is the expected performance metric.

Rather than commence with identifying attributes for the whole system, identify attributes for each requirement and then combine these to arrive at system-wide attributes. For each requirement, list the “critical-to-great-experience” attributes. If it is easier to do this at the level of features, then do so, i.e. identify key attributes for each feature and then arrive at the attributes at the requirement level. Use a standard attribute list like ISO 9126 to ensure that no attributes are missed out. What we have now is a list of attributes for each requirement.

Feature   Attribute   Metric
F1        A1          a1
          A2          b1
          A3          c
F2        A2          b2
          A4          d
F3        A1          a2
          A2          b3

Once the attributes for each requirement or feature have been identified, group the common attributes to formulate the system-wide attributes.

Attribute   Where (feature/metric)
A1          F1(a1), F3(a2)
A2          F1(b1), F2(b2), F3(b3)
A3          F1(c)
A4          F2(d)

This enables better clarity as to what each attribute really means, ensuring that the attributes or non-functional requirements are indeed testable.
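The grouping step above is a simple inversion of the per-feature table and can be sketched as follows. The feature/attribute/metric names come from this slide; the code structure itself is a hypothetical illustration.

```python
# Sketch: invert per-feature attribute lists into system-wide
# attributes, mirroring the grouped table above.

from collections import defaultdict

per_feature = {
    "F1": [("A1", "a1"), ("A2", "b1"), ("A3", "c")],
    "F2": [("A2", "b2"), ("A4", "d")],
    "F3": [("A1", "a2"), ("A2", "b3")],
}

system_wide = defaultdict(list)
for feature, attrs in per_feature.items():
    for attribute, metric in attrs:
        system_wide[attribute].append(f"{feature}({metric})")

for attribute in sorted(system_wide):
    print(attribute, system_wide[attribute])
```

Each system-wide attribute now carries the concrete per-feature metrics that make it testable.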




                                                                                                                                                      122
Attribute analysis (continued)
It is quite possible that the attributes are descriptive and therefore hazy/fuzzy. It is now important to ensure that every attribute is testable. As a first step, identify the key characteristic(s) of each attribute. For each characteristic, identify possible measures so that we can come up with a number/metric to ensure clarity. Then identify the value expected for each measure.


Attribute → Characteristic(s) → Measure(s) → Expected value(s)

1. Identify the attributes based on users.
2. Identify the characteristics based on usage patterns.
3. Based on (2), derive technical measures.
4. Identify expected values for the measures in (3), ensuring that these reflect expectations that are testable.



 The benefits of application of this technique are:
 1. That we do focus on the non-functional aspects of the system.
 2. That the non-functional requirements are indeed testable.
 3. That we are able to come up with good questions to extract/clarify non-functional requirements when they are unstated or ill-stated.




                                                                                                                                                     123
Value prioritisation
A technique to prioritise the elements to be validated to enable effective and efficient testing.
A typical system consists of multiple use cases (requirements) that are used by different types of users at differing frequencies. The business importance of each use case is different, and the same is true of the different user types. Since testing is about reducing the business risk to acceptable levels, and accomplishing this at optimal effort/cost, we need to understand the business importance and criticality of users, use cases and the associated features. This technique enables a logical prioritisation of value so that test effort is targeted at the right aspects.

Application
Identify the various types of users. For each user type, identify the typical number of users. If the number of users for a user type is large, we may conclude that this user type is indeed important. However, just because the number of users for a given user type is low, we cannot necessarily conclude that this user type is not important. It is important to understand how important this user is to successful deployment of the system, i.e. to understand the impact if this user type’s expectations are not met. Now combine the number of users for a user type and the business impact of this user type on successful deployment, and arrive at the priority of the user type. Do this for all the user types.

In addition to user type prioritisation, it is necessary that we understand the importance of what a user type does, i.e. which requirements (use cases/business flows) are most/more important. Here again we can apply the same logic that we applied for each user type: understand the frequency of usage and the business impact of an incorrectly implemented requirement. Hence it is important to understand what types of users use the requirement and how many times they use it in a given span of time. The same caveat applies: low frequency of usage does not necessarily indicate a less important requirement, as that requirement may cause severe business loss if it did not work correctly, despite being used infrequently.

To arrive at the prioritisation of a requirement, one can break down the requirement into its constituent technical features and perform a similar analysis, if it is easier to analyse this from the lower-level technical features.

The end point of application of this STEM core concept results in a rational way to arrive at prioritisation of features, requirements and user types.

Benefits
This allows us to develop a test strategy that can indeed focus more on the key aspects, utilising effort, time and cost effectively and efficiently. Understanding prioritisation allows us to set the priority of test scenarios/cases to
 ‣enable optimal regression
 ‣enable choosing the key test cases to execute in case of constrained time
 ‣enable correct severity rating of defects e.g. failure of important test cases could result in high severity defects.
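The combination of usage frequency and business impact can be sketched as a small scoring function. The weighting scheme below is an assumption for illustration; HBT only asks that both factors be combined, not how.

```python
# Hedged sketch: rank requirements by normalised usage frequency
# scaled by business impact. Weights and figures are made up.

IMPACT_WEIGHT = {"Acceptable": 1, "Moderate": 2, "High": 3,
                 "V High": 4, "V V High": 5}

def priority(usage_count, impact, max_usage):
    """Normalise usage to 0..1 and scale by the business-impact weight."""
    return round((usage_count / max_usage) * IMPACT_WEIGHT[impact], 2)

requirements = [("R1", 1200, "V V High"), ("R2", 300, "High"),
                ("R3", 50, "V High")]
max_usage = max(n for _, n, _ in requirements)

ranked = sorted(requirements,
                key=lambda r: priority(r[1], r[2], max_usage),
                reverse=True)
print([r[0] for r in ranked])
```

Note that R3, despite its low usage, keeps a non-trivial score because of its "V High" impact, which is exactly the caveat made in the text.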

                                                                                                                                                           124
Value prioritisation (continued)

User type   #Users   Bus. criticality
UT1         n1       V V High
UT2         n2       High
UT3         n3       V High

Understand the business value of the features and their priorities.
Effective testing is about reducing business risk to acceptable levels.
This technique helps you rank the various end users, use cases/features.

Req./Feature   Usage freq.   Impact
R1(F1-F3)      n1            V V High
R2(F2-F4)      n2            High
R3(F4-F6)      n3            V High

Rating scales:
Need           Must-have, Could-have, Nice-to-have
Frequency      Heavy, Moderate, Light
Loss outcome   Huge, Moderate, Acceptable




                                                                                                               125
Operational profiling
A technique to identify the usage patterns and hence the load profile.

Understanding the probable rate and number of transactions on a real system is critical to ensure that the system is designed well and later sized and deployed well. A good understanding of the business domain is seen as a key enabler in arriving at the usage profile. Operational profiling is a technique that enables one to scientifically arrive at a real-life profile of usage. A good understanding of this concept alleviates the problem of lacking deep domain knowledge when trying to understand the usage profile. This core concept consists of these key aspects:
  1. Mode – represents a time period of usage, e.g. end of month, where the usage patterns are distinctive and different
  2. Key operations (features/requirements) used
  3. Types of end users associated with the key features/requirements
  4. Number of end users for each type of user
  5. Rate of arrival of transactions

In short, for a given mode, identify the end-user types and their key operations, then identify the number of users for each type of user, and then identify the rate of arrival of transactions. Employing this core concept allows us to think better and ask specific questions to understand the marketplace and the usage profile in typical and worst-case scenarios.
The operational profile is extremely useful for creating test scenarios for load, stress, performance, scalability and reliability tests.
So, the profiling consists of identifying the various actors, the use cases these actors use, the frequency (rate) at which they use them, and the number of operations that they would perform in different time periods.
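The five key aspects can be captured in a small data structure from which per-mode load is derived. All names and figures below are hypothetical illustrations of the concept, not real data.

```python
# Illustrative sketch: an operational profile as
# (mode, user type, operation) -> transaction rate, from which a
# worst-case load per mode can be derived for stress/load tests.

profile = {
    # (mode,        user type, operation): transactions per hour
    ("EndOfMonth", "UT1", "O1"): 500,
    ("EndOfMonth", "UT1", "O2"): 250,
    ("EndOfMonth", "UT2", "O4"): 900,
    ("Normal",     "UT1", "O1"): 50,
    ("Normal",     "UT2", "O4"): 35,
}

def load_for_mode(mode):
    """Total arrival rate for a mode, e.g. to size a stress test."""
    return sum(rate for (m, _, _), rate in profile.items() if m == mode)

print(load_for_mode("EndOfMonth"))  # 1650
print(load_for_mode("Normal"))      # 85
```

The distinct modes make the point of the technique: the end-of-month profile, not the average day, is what the performance tests must reproduce.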




                                                                                                                                             126
Operational profiling (continued)

(Diagram: user types UT1-UT3 mapped to the operations O1-O8 they perform on the software/system.)

                                 Time
User type   #Users   Operation   t1    t2    t3    t4
UT1         n1       O1          50    20    30    20
            n2       O2          25     0    15    10
UT2         n3       O3         100    50    15     0
            n4       O4           0    35    35    50

(Chart: number of operations O1-O4 plotted against time periods t1-t4.)

1. Identify the key operations of the system.
2. Connect the user types & operations, i.e. which operations are used by which user types.
3. For each user type, list out the typical & maximum number of users.
4. Identify modes of usage, e.g. different times of day/week/month/year.
5. For each mode, approximate the number of operations in a given time period for each user type.
6. Finally, approximate the rate of arrival of the operations.
NOTE:
1. A user need not be a physical user; it could be another system.
                                                                                                                                       127
GQM (Goal-Question-Metric)
A technique to ensure that the goal (cleanliness criteria) is indeed testable.




 A technique that helps you to set clear goals.
 Metrics may be viewed as milestone markers on the way to the goal.
 Collecting metrics is easy; the hard part is “how is it useful in helping me reach my goal?”

 1. Identify goal(s) first
 2. Come up with questions to understand the distance from the goal
 3. To answer these questions objectively, identify objective measures




(Diagram: Goal → Questions Q1, Q2 → Metrics M1-M4.)

Vague cleanliness criteria are useless.
This technique enables you to derive cleanliness criteria that are clear by forcing you to identify:
1. What is cleanliness? (Goal)
2. How do you ascertain the cleanliness? (Question)
3. How do you make this less subjective, i.e. via an objective measure? (Metric)
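The goal/question/metric tree can be represented directly as nested data. The concrete goal, questions and metrics below are hypothetical examples, not from the source.

```python
# Minimal GQM tree sketch: a goal, the questions that probe the
# distance from it, and the objective metrics answering each question.

gqm = {
    "goal": "Release build is functionally clean",
    "questions": [
        {"q": "Are the critical flows defect-free?",
         "metrics": ["open severity-1 defects",
                     "critical-flow test pass rate"]},
        {"q": "Is defect discovery slowing down?",
         "metrics": ["defects found per test cycle"]},
    ],
}

# Every metric must trace back to a question, and every question to
# the goal; orphan metrics are the "easy to collect, useless" kind.
all_metrics = [m for q in gqm["questions"] for m in q["metrics"]]
print(len(all_metrics))
```

The traceability is the point: a metric that cannot be reached by walking down from the goal answers no question and should be dropped.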




                                                                                                                                       128
Negative thinking
A technique to identify potential defect types based on “Aspects” of a system

  The objective is to identify potential defects in the entity under test in a scientific manner by adopting a fault centric approach. The intent is
  to think ‘negatively’ on various aspects and thereby identify potential defects in the entity under test.

  Any entity under test processes data according to certain business logic, is built using structural components, uses resources from the
  environment, and is ultimately used by certain classes of end users. To hypothesise potential defects in an entity under test, the above
  generalisation can be applied in a scientific manner.

Aspect           Generalized PDTs
Data             Violation of type specification
                 Incorrect format of data (data layout, fixed vs. variable length)
                 Large volume of data
                 High rate of data arrival
                 Duplication of data that is meant to be unique
Business logic   Missing conditions & values that govern the business logic
                 Conflicting conditions
                 Incorrect handling of erroneous paths
                 Impact on attributes, e.g. performance, scalability, reliability, security etc.
                 Transaction-related issues, i.e. multiple operations need to complete, else none should be performed
Structure        Consuming dynamic resources and not releasing them
                 Errors/exceptions not handled well or ignored
                 Synchronization issues, deadlock issues, race conditions
                 Blocking leading to “hanging” when dependent code does not return
Environment      Improper configuration of settings in the environment
                 Non-availability of resources
                 Incorrect versions of dependent sub-systems/components
                 Slow connections
Usage            Wrong sequencing of usage
                 Improper disconnects/aborts
                 High rate of usage
                 Large usage volume
                 Unauthorized usage, i.e. violation of access control
                 Difficult to use, i.e. not very intuitive
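The generalised PDT table lends itself to a lookup from which a planner can pull a checklist per aspect. Entries below are abbreviated from the table above; the `checklist` helper is a hypothetical illustration.

```python
# Sketch: the generalised PDT table as a lookup, so a test planner
# can pull a checklist of potential defect types for the aspects
# relevant to a given entity under test.

GENERALISED_PDTS = {
    "Data": ["Violation of type specification",
             "Incorrect format of data",
             "Large volume of data",
             "High rate of data arrival",
             "Duplication of data meant to be unique"],
    "Business logic": ["Missing conditions/values",
                       "Conflicting conditions",
                       "Incorrect handling of erroneous paths"],
    "Structure": ["Resources not released",
                  "Errors/exceptions not handled",
                  "Synchronization/deadlock/race issues"],
    "Environment": ["Improper configuration",
                    "Non-availability of resources",
                    "Incorrect versions of dependencies"],
    "Usage": ["Wrong sequencing of usage",
              "Improper disconnects/aborts",
              "Unauthorized usage"],
}

def checklist(*aspects):
    """Flatten the PDTs for the aspects relevant to an entity."""
    return [pdt for a in aspects for pdt in GENERALISED_PDTS[a]]

print(len(checklist("Data", "Usage")))
```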

                                                                                                                                                      129
Negative thinking (continued)
A technique to identify potential defect types based on “Aspects” of a system

The objective is to identify potential defects in the entity under test in a scientific manner by adopting a fault-centric approach.

This technique decomposes an entity into FIVE elemental aspects:
‣Data
‣Business logic
‣Structure
‣Environment
‣Usage

The intent is to think ‘negatively’ on these FIVE aspects and thereby identify potential defects in the entity under test.

Any entity under test processes data according to certain business logic, is built using structural components, uses resources from the environment, and is ultimately used by certain classes of end users.

(Diagram: the entity's Business Logic uses Data, is used by Usage, is built using Structure, which uses/lives in the Environment.)

                                                                                                                               130
Generalised PDTs for “Data” Aspect




                                     131
Generalised PDTs for “Business Logic” Aspect




                                               132
Generalised PDTs for “Structure” Aspect




                                          133
Generalised PDTs for “Environment” Aspect




                                            134
Generalised PDTs for “Usage” Aspect




                                      135
Defect centricity principle
A principle to group similar defects into defect types



(Diagram: potential defects observed across the system at various levels are grouped into PDTs.)

A principle to group similar potential defects into potential defect types (PDTs).
The intent is to create a manageable list of PDTs.




                                                                                                            136
EFF Model (Error-Fault-Failure)
A technique to identify potential defect types by viewing the system from different “Views”

Errors injected into the system irritate faults, causing them to propagate and result in failures.
Failure is what the customer observes. High-impact failures are the result of severe faults.
EFF enables failure-centric and error-injection-centric thinking to identify potential defects, complementing fault-centric thinking.

Each “Aspect” can be viewed from THREE angles.



         Error injection             What errors can we inject?


                    ERROR
                    irritates
                    FAULT



        Fault proneness              What inherent faults can we “irritate”?


                    FAULT propagates resulting
                    in FAILURE



              Failure                What failures may be caused?
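Crossing the five aspects with the three views yields a checklist of defect-hypothesis prompts. The aspect and view names are from these slides; generating the cross product this way is my illustration.

```python
# Sketch: EFF pairs each of the five aspects with the three views,
# yielding prompts for hypothesising potential defects.

ASPECTS = ["Data", "Business logic", "Structure", "Environment", "Usage"]
VIEWS = {
    "Error injection": "What errors can we inject?",
    "Fault proneness": "What inherent faults can we irritate?",
    "Failure": "What failures may be caused?",
}

# One prompt per (aspect, view) combination
prompts = [(aspect, view, question)
           for aspect in ASPECTS
           for view, question in VIEWS.items()]

print(len(prompts))  # 5 aspects x 3 views = 15 prompts
```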


                                                                                                                                            137
Orthogonality Principle
A principle that clearly delineates quality levels, test types and test techniques.


(Diagram: a defect located along three axes: Technique, Type, Stage/Level.)

This principle states that to uncover a defect optimally, you need to identify the earliest stage of detection (i.e. quality level), identify the specific type of test, and use the most appropriate test techniques (i.e. bait) to ensure that the scenarios & cases are adequate.

This allows us to understand the
‣earliest point of detection
‣type of test needed &
‣effective test technique

i.e. Given a potential defect:
1. What is the earliest point of detection?
2. What type of test needs to be done?
3. What test techniques would be most suitable?

Identifying the levels, the corresponding test types and techniques is what constitutes a strategy.

                                                                                                             138
Quality growth principle
A principle to set up progressively improving levels of quality/cleanliness for an entity under test.
(Diagram: a staircase of quality levels QL1-QL4; each level targets additional PDTs (PDT1 up to PDT10), with cleanliness growing stage by stage.)

Staging quality growth via levels enables clarity of defect detection - “what to detect when”.

Reaching the “pinnacle of excellence” is like climbing the staircase of quality.

This also allows us to objectively measure quality.
                                                                                                          139
Process landscape
A guideline that lists the various “process” models for test design and execution.


  Process models
  1. Disciplined/Structured    Design first
  2. Ad-hoc/Random/Creative    On-the-fly design
  3. Contextual                Context-based design
  4. Historical                Past-issues-based design
  5. Experiential              Domain-based design

  Defect type    Process model
  DT1            1, 4
  DT2            2, 3
  DT3            4

The process model employed must be based on the type of defect to be uncovered. Certain types of defects are best discovered using a disciplined approach, while some may rely on the individual’s creativity at the time of testing. Some of these may rely on pure domain experience, while some may be better uncovered by a careful analysis of the past history of issues, and some of them need a good understanding of the context of deployment and usage.
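The defect-type-to-process-model table can be sketched as a lookup. The numeric codes follow the list above; the helper function is a hypothetical illustration.

```python
# Sketch: pick the process model(s) suited to hunting a given defect
# type, using the mapping table from the slide.

PROCESS_MODELS = {1: "Disciplined/Structured",
                  2: "Ad-hoc/Random/Creative",
                  3: "Contextual",
                  4: "Historical",
                  5: "Experiential"}

DEFECT_TO_MODELS = {"DT1": [1, 4], "DT2": [2, 3], "DT3": [4]}

def models_for(defect_type):
    """Process models best suited to uncovering this defect type."""
    return [PROCESS_MODELS[m] for m in DEFECT_TO_MODELS[defect_type]]

print(models_for("DT1"))
```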




                                                                                                                                             140
Tooling needs analysis
A technique to analyse the needs of tooling & automation.


 Tooling needs can be in:
 1. Structure analysis
 2. Installation
 3. Setup/configuration
 4. Data creation
 5. Test execution
 6. Outcome assessment
 7. Behaviour probing

 Test type   Scenarios    Execution tooling needs
 TT1         A.TS1 ...    Manually
 TT2         B.TS1 ...    Not only manually
 TT3         C.TS1 ...    Nice to automate

 Guiding aspects to automation
 1. Frequent basic tests
 2. Regression oriented
 3. Time consuming
 4. Effort consuming
 5. Requires high skills

 Tooling for automating testing costs money. It is therefore necessary to be sure of the purpose or the objective to be achieved. This technique enables analysing the tooling needs as to what is to be automated and the reasons/benefits.


A technique to analyse tooling needs in a disciplined manner:
 1. First analyse what aspects of the test life-cycle need tooling help.
 2. Next analyse what scenarios cannot be executed manually at all.
 3. Identify which of the scenarios that can be executed manually would be nice to automate,
   based on the suggested parameters, i.e. the guiding aspects to automation.
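The triage described above can be sketched as a small classifier: scenarios that cannot be run manually must be automated, while the rest are scored against the five guiding aspects. The scoring threshold is an assumption for illustration.

```python
# Hedged sketch of tooling-needs triage. A scenario is a dict of
# flags; names of fields and the >= 2 threshold are hypothetical.

GUIDING_ASPECTS = ["frequent", "regression", "time_consuming",
                   "effort_consuming", "high_skill"]

def triage(scenario):
    """Classify one scenario into must/nice/manual buckets."""
    if not scenario["manually_executable"]:
        return "must-automate"
    score = sum(bool(scenario.get(a, False)) for a in GUIDING_ASPECTS)
    return "nice-to-automate" if score >= 2 else "manual"

s1 = {"manually_executable": False}
s2 = {"manually_executable": True, "frequent": True, "regression": True}
s3 = {"manually_executable": True}
print(triage(s1), triage(s2), triage(s3))
```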
                                                                                                                                     141
Cycle scoping
A technique to set up goal-focused test cycles with clear scope for each cycle.

                                                               Test cycle is the point of time wherein the build
                                                               is validated. It takes multiple test cycles
            Cycle#1                    C1     C2      C4       to validate a product. Each test cycle should have
                                                               a clear scope. Scope of testing in a
                                                               cycle is “what needs to be tested and what
         What features?           F1, F2    F3,F4   F1,F2,     aspect of cleanliness needs to be evaluated”.
         { F1, F2,…,Fn}                             F3,F4
                                                               The scope of a cycle in HBT is a Cartesian
                          Scope
 Scope




                                                               product of the Features (or Entities) and the
               x                                               Types of tests to be executed.
                                  T1        T1,T2   T1,T2
         Test types
         {T1, T2,…,Tn}                              T3         Scope = {Features} x {Test types}
                                                               i.e What features will be tested, what tests will
                                                    QL3        be done is the scope of a cycle.
                                             QL2
                                   QL1                         In short the focus of each cycle is uncover
                                                               certain PDTs enabling a monotonic quality
                                                               growth in line with the intended quality levels.




                                                                                                                    142
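The Cartesian-product definition of scope above can be computed directly. A minimal sketch, using the feature and test-type labels from the slide as illustrative names:

```python
# Scope of a cycle = {Features} x {Test types} (Cartesian product).
from itertools import product

def cycle_scope(features, test_types):
    """Every (feature, test type) pair that must be covered in the cycle."""
    return set(product(features, test_types))

# Cycle C2 from the slide: features F3, F4 with test types T1, T2.
scope_c2 = cycle_scope(["F3", "F4"], ["T1", "T2"])
print(sorted(scope_c2))
```

Each pair in the result is one unit of cycle scope: that feature tested with that test type.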
Defect centred activity breakdown: a technique to estimate test effort by identifying the various activities required to uncover the potential defects in the entity under test.

  CC    PDT           QL    TT          TS    Activities
        Flows         QL4   TT5               Design
        Features      QL3   TT4, TT5          Document
        Screens       QL2   TT3               Automate
        Components    QL1   TT1, TT2          Execute

Estimate effort based on the PDTs that have to be uncovered in the various 'elements' of the software at different stages. Identify the PDTs to be uncovered, stage them, identify tests, break down each test into its various activities, estimate effort at the leaf level, and then sum them.
                                                                                                                                143
Defect centred activity breakdown (continued)

Activity flow and its effort drivers:

  Understand (#Elements, #TS)  ->  Execute (#Cycles)  ->  Log defects (#Defects)  ->  Manage (#Hrs/wk, #Cycles)

Understanding also covers Design & Documentation, Review and Automate. Review effort depends on the mode of "doing": common test cases via checklist, static/dynamic.

For a given level, estimate effort based on #BasicElements, #TS, #Cycles and #Defects.
                                                                                              144
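The leaf-level sum described above can be sketched as follows. The scenario names, hour values and the choice to repeat only execution per cycle are illustrative assumptions, not figures from the methodology:

```python
# Sketch: estimate effort by summing leaf-level activity estimates (hours).
# Each test scenario is broken into the activities named on the slide.
# Assumption for this example: design/document/automate are one-time,
# execution repeats every cycle.

estimates = {
    "TS1": {"design": 2.0, "document": 1.0, "automate": 4.0, "execute": 0.5},
    "TS2": {"design": 1.5, "document": 0.5, "execute": 1.0},
}
CYCLES = 3

def total_effort(estimates, cycles):
    total = 0.0
    for activities in estimates.values():
        for activity, hours in activities.items():
            total += hours * (cycles if activity == "execute" else 1)
    return total

print(total_effort(estimates, CYCLES))  # leaf estimates summed across cycles
```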
Approximation principle                                                           A principle to aid in scientific approximation.



	

The measure whose value is to be approximated is based on a set of parameters, each having a varying sensitivity to the outcome, with a formula that binds them. Hypothesise the value of each parameter; if it is sensitive, test the hypothesis, and then apply the formula. Iterate based on learning and the potential estimated variation.

1. Identify the key parameters.
2. Work out the formula.
3. Understand which of these parameters are 'sensitive', i.e. where a small variation can grossly affect the outcome.
4. Check if the parameters can be broken down further until their values can be estimated correctly.
5. Now estimate the value of the parameters:
   5.1. Guess/hypothesise based on best judgment.
   5.2. Test the hypothesis and correct it to a value closer to reality.
6. Apply the formula and compute the value.
7. Iterate based on the learning gleaned from this approximation cycle and the estimated potential variation.




                                                                                                                                             145
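The steps above can be walked through on a toy estimate. The formula (effort = number of scenarios x hours per scenario), the pilot data and all numbers are illustrative assumptions chosen only to show the hypothesise-test-correct loop:

```python
# Approximation principle, applied to a hypothetical effort estimate.

def estimate_effort(n_scenarios, hours_per_scenario):
    # Steps 1-2: parameters and the formula that binds them.
    return n_scenarios * hours_per_scenario

# Step 3: hours_per_scenario is the sensitive parameter, so test it.
hypothesis = 2.0                      # 5.1: guess based on best judgment
pilot = [1.5, 2.5, 3.0]               # 5.2: measure a few scenarios for real
corrected = sum(pilot) / len(pilot)   #      correct towards reality

effort = estimate_effort(50, corrected)   # Step 6: apply the formula
print(round(effort, 1))
# Step 7 would repeat this with the learning from the next cycle.
```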
Box model: a technique to rapidly understand the intended functional behaviour of an entity under test by identifying the conditions and then the data and business logic (condition sequencing).

[Diagram: a box with inputs I1, I2 and outputs O1, O2, described by its business logic.]

Given an entity to be tested, understand the intended behaviour rapidly to generate the behaviour model:
1. Identify the conditions that govern the behaviour first.
2. Then identify the data elements that drive the conditions.
3. Finally, identify the sequencing of conditions as a flow to understand the business logic (or behaviour).

The focus is to extract the conditions and identify the data elements to enable construction of a behaviour model, and also to discover unstated/missing behaviour.

                                                                                                              146
Behaviour-Stimuli (BEST) approach: a technique to design test scenarios and cases, ensuring sufficient yet optimal and purposeful test cases.

Testing is about injecting a variety of stimuli and assessing the behaviour by comparing the actual result with the expected result.

First identify the behaviours to be validated, and then generate the stimuli. A behaviour is denoted by a test scenario, while test cases represent stimuli. This is a hierarchical approach to test design; it enables clarity, coverage and optimality.

[Diagram: entity under test with inputs I1, I2 and outputs O1, O2, O3; a set of test scenarios TS #1, ..., each containing test cases TC #1, TC #2, TC #3.]


                                                                                                                                              147
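The scenario/case hierarchy lends itself to a simple data structure. A minimal sketch; the login behaviours and case names are invented purely for illustration:

```python
# BEST hierarchy: a behaviour is a test scenario (TS); each stimulus under it
# is a test case (TC). Hypothetical example for a login feature.

scenarios = {
    "TS1: valid login succeeds": [
        "TC1: correct user and password",
        "TC2: correct user, password with trailing space trimmed",
    ],
    "TS2: invalid login rejected": [
        "TC3: wrong password",
        "TC4: unknown user",
        "TC5: empty password",
    ],
}

def counts(scenarios):
    """Behaviours and stimuli designed so far."""
    return len(scenarios), sum(len(tcs) for tcs in scenarios.values())

n_ts, n_tc = counts(scenarios)
print(f"{n_ts} behaviours, {n_tc} stimuli")
```

Designing top-down this way makes coverage visible: every behaviour has at least one stimulus, and stray test cases with no parent behaviour cannot exist.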
Input granularity principle: a principle to identify the data element(s) for an entity under test and their specification.

The notion of what an input is, and therefore its specification, is based on the level of testing. The input specification at a lower level is 'fine', whereas at higher levels it is 'coarse'. Fine implies basic data types, whereas coarse implies complex/aggregate data types.

Understanding this is key to generating test cases appropriate to the level of testing.
                                                                                                                                             148
Complexity assessment: a technique to understand an entity's complexity in order to identify suitable test techniques.

  Complexity
    Behavioural complexity
      Business logic complexity
      Data complexity
      Attribute complexity
    Structural complexity
      Logic complexity
      Resource complexity

Systems that are complex demand to be tested more carefully. Some systems are complex in their business logic, i.e. too many conditions and combinations, while some systems are structurally complex. In certain systems the attributes may be demanding, and therefore the complexity may lie in the attributes.

Complexity can be broken into:
1. Functional complexity
2. Structural complexity
3. Attribute complexity

If (1) is complex, black box techniques are useful.
If (2) is complex, white box techniques are useful.
If (3) is complex, a judicious mix of (1) and (2) is necessary.
                                                                                                             149
Coverage evaluation: a technique to assess test case adequacy.

Adequacy of test cases is key to clean software. This technique helps in understanding the breadth, depth and porosity of the test cases.

Breadth relates to the various types of tests to uncover the different types of defects.
Depth relates to the various levels of tests, to ensure that defects at all levels can be uncovered.
Porosity relates to the "fine-ness" of a test case: whether it is a clear combination of data or not.

Additionally, it is necessary to understand the conformance and defect orientation of the test cases.

[Diagram: a cube with axes of test breadth, test depth and test porosity.]

  Breadth    Types of tests
  Depth      Quality levels
  Porosity   Test case "fine-ness"
                                                                                                                                 150
Automation complexity analysis: a technique to analyse the complexity of tooling/automation.

The complexity of a script, and therefore the effort required to design and code it, depends on various parameters. A script consists of sections of code to set up the condition for the test, drive the test, compare the outcome, log information and finally clean up.

The complexity of the script may therefore be decomposed into the individual section complexities and analysed:

  Setup     Complexity depends on #steps, data, inter-relationships
  Driver    Complexity depends on length of flow (#steps), error-recovery complexity
  Oracle    Complexity depends on #comparisons, the type of comparison (coarse versus fine) and whether it is deterministic or non-deterministic
  Log       Complexity depends on #log points and log information details
  Cleanup   Complexity depends on #steps, data inter-relationships
                                                                                                                                               151
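The decompose-and-sum analysis can be sketched numerically. The 1-5 scoring scale and the example scores are assumptions for illustration; the five sections are the ones named on the slide:

```python
# Sketch: rate each script section's complexity (say 1-5) and sum the
# section scores into an overall script complexity.

SECTIONS = ("setup", "driver", "oracle", "log", "cleanup")

def script_complexity(scores):
    assert set(scores) == set(SECTIONS), "score every section"
    return sum(scores[s] for s in SECTIONS)

# Hypothetical script: non-deterministic oracle dominates the complexity.
scores = {"setup": 2, "driver": 4, "oracle": 5, "log": 1, "cleanup": 2}
print(script_complexity(scores))
```

Comparing such scores across scripts helps sequence automation work: high-oracle-complexity scripts, for instance, may justify extra comparator design effort.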
Minimal babysitting principle: a principle to ensure unattended automated test runs.

[Diagram: a queue of test scripts #1 to #N.]

When automated tests are run, some of the scripts may fail and abort the entire test cycle. To utilise automation most effectively and increase test efficiency, it is necessary to maximise the test run, i.e. as many scripts as can be run must be executed.

This principle states that test scripts must be designed in such a manner that 'babysitting', i.e. manually restarting the test run, is minimal.




                                                                                                                         152
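The principle amounts to isolating each script so one failure cannot abort the run. A minimal sketch of such a runner; the script names and failure are invented for the example:

```python
# Sketch: run every script, isolate failures, and keep going so the
# cycle completes unattended (minimal babysitting).

def run_all(scripts):
    results = {}
    for name, script in scripts:
        try:
            script()
            results[name] = "pass"
        except Exception as exc:   # record the failure, do not abort the run
            results[name] = f"fail: {exc}"
    return results

def failing_script():
    raise RuntimeError("environment down")

scripts = [
    ("script1", lambda: None),
    ("script2", failing_script),
    ("script3", lambda: None),
]
print(run_all(scripts))  # script3 still runs despite script2 failing
```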
Separation of concerns principle: a principle to ensure delineation of code and data in automation, to facilitate robust and maintainable automation.

A script consists of code and the data it uses to drive the system under test. The basic attribute of a good script is its ability to be flexible, with minimal changes needed for adaptation.

Hence it is necessary that a script does not contain hard-coded data. The data in a script pertains to configuration/setup and the actual test data. This principle states that there must be a clean separation of the code and data aspects of the script.

  Code:  common code, specific code
  Data:  setup/configuration information, test data

                                                                                                           153
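A minimal sketch of the separation: the script code is generic, while the setup/configuration and test data live outside it. Here a JSON string stands in for an external file, and the URL, field names and expected values are all hypothetical:

```python
# Sketch: clean separation of code and data. The code only interprets
# externally supplied configuration and test data; nothing is hard-coded.

import json

config_and_data = json.loads("""
{
  "setup":    {"base_url": "http://localhost:8080", "timeout_s": 5},
  "testdata": [{"user": "alice", "expect": "welcome"},
               {"user": "",      "expect": "error"}]
}
""")

def build_request(setup, case):
    # Common, data-driven code: adapting to a new environment or new
    # test data means editing the data file, not this function.
    return f"GET {setup['base_url']}/login?user={case['user']}"

requests = [build_request(config_and_data["setup"], case)
            for case in config_and_data["testdata"]]
print(requests)
```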
Quality quantification model: a technique to "quantify quality" in alignment with the cleanliness criteria and quality levels.

Quantify software quality to allow for better decision making. Software is invisible, and quality is an invisible aspect of this invisible entity. This technique enables you to set up an objective measurement system for measuring the quality of software.

Rate each cleanliness criterion.
Represent the ratings as a Kiviat chart.
The area under the chart for a cycle represents the "Quality Index".




                                                                                                                                       154
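The Kiviat-area idea can be computed directly: with the criteria as equally spaced radar axes, the chart is a polygon whose area follows from adjacent-axis triangles. The 0-10 rating scale and the per-cycle ratings below are illustrative assumptions:

```python
# Sketch: "Quality Index" as the area of the Kiviat (radar) chart of
# cleanliness-criteria ratings, with axes equally spaced around the circle.
import math

def quality_index(ratings):
    n = len(ratings)
    angle = 2 * math.pi / n
    # Sum of the triangle areas between each pair of adjacent axes.
    return 0.5 * math.sin(angle) * sum(
        ratings[i] * ratings[(i + 1) % n] for i in range(n))

cycle1 = [6, 5, 7, 4, 6]   # hypothetical ratings, one per criterion
cycle2 = [8, 7, 8, 6, 7]
print(quality_index(cycle1), quality_index(cycle2))
```

As cleanliness ratings improve cycle over cycle, the index grows monotonically, giving the single-number quality trend the slide describes.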
Metrics landscape: a guideline for designing goal-oriented metrics to rationally assess quality, delivery risk and test effectiveness.

To know where we are and how we are doing, it is necessary to have a beacon to light up the way and ensure good visibility. "Good goal-oriented metrics are that beacon."

[Diagram: metrics span four quadrants - Quality, Progress, Risk, Process.]

Example (Process):
  Effectiveness: test breadth, depth, defect escapes, +:- ratio, coverage
  Efficiency: blockers
  Productivity: #TC executed/designed

                                                                                                                           155
Defect rating principle: a principle to rate defect severity and priority.

Defects are rated by severity and priority.
Severity of a defect is decided by the impact of the defect on the customer.
Priority of a defect is decided by the risk posed to timely release.

The customer ("business risk") decides 'Severity'; the development team ("release risk") decides 'Priority'.

  Severity: serious impact implies HIGH severity.
  Priority: a blocker implies HIGH priority.
                                                                                                                                156
Contextual awareness: a principle to learn from context, to enable better understanding and increased test effectiveness.

Good testing requires keen observation skills and a sharp 'ear to the ground'. Observation of the context and learning from it are key to better understanding and to improvement of the test cases.

"Familiarity breeds contempt": getting familiar with the internal workings and the external behaviour goes a long way in significantly enhancing test effectiveness.

[Diagram: test cases (before) are refined through a test cycle into test cases (after).]
                                                                                                           157

Hypothesis Based Testing (HBT) Cookbook

  • 2. © 2011-12, STAG Software Private Limited. All rights reserved. STEM is the trademark of STAG Software Private Limited. HBT is the intellectual property of STAG Software Private Limited. This e-book is presented by STAG Software Private Limited www.stagsoftware.com 2
  • 3. Hypothesis Based Testing (HBT) is a scientific personal test methodology that is unique in its approach to ensuring cleanliness of software. It is a goal focused approach, commencing with setting up of cleanliness criteria, hypothesising potential defect types that can impede this, and then performing activities to ensure that testing is purposeful and therefore effective and efficient. The central theme HBT is constructing a hypotheses of potential defects that may be probable, and then scientifically proving that they do not indeed exist. The activities of testing like test strategy, test design, tooling & automating become purposeful as these are focused on uncovering the hypothesised defect types ensuring that these activities are done scientifically and in a disciplined manner. HBT is based on sound engineering principles geared to deliver the promise of guaranteeing cleanliness. Its core value proposition is about hypothesising potential defects that may be present in the software and then allow you to engineer a staged detection model to uncover the defects faster and cheaper that other typical test methodologies. HBT fits into any development methodology and weaves into your organisational test process. HBT is powered by STEMTM (STAG Test Engineering Method) a collection of EIGHT disciplines of thinking. STEM provides the foundation for scientific thinking to perform the various activities. It is personal scientific inquiry process that is assisted by techniques, principles and guidelines to decompose the problem, identify cleanliness criteria, hypothesise potential defect types, formulate test strategy, design test cases, identify metrics and build appropriate automation. 3
  • 4. Inspirations from nature HBT has been inspired by certain ideas and these are discussed below. The inspirations have come from “Properties of matter”, “Fractional distillation”, “Sherlock Holmes”, “Picture of baby growth”. Properties of matter Physical & Chemical properties of matter allow us to: ... classify “affected by” ... understand behaviours, interactions ... enable checking purity How can we use a similar train of thought to identify “properties of cleanliness” and then “types of defects”? “Properties of the system” End user expectations Cleanliness criteria Issues in specifications, structure, environment Potential Defect Types (PDT) and behaviour 4
  • 5. Inspirations from nature Fractional distillation This is a technique to separate mixtures that have components of different boiling points In software systems, there exists a variety of defect types that may be present in the system. How can we apply this thought process to optimally uncover the defects, by “fractionally distilling” them? Can we separate these types of defects on the basis of certain properties and optimally uncover the defects? From : http://withfriendship.com 5
  • 6. Inspirations from nature Picture of baby growth The picture shows the health of the foetus/baby . This picture shows size, shape, parts and types of issues not present Seeking inspiration, can we depict the health of software system in a similar manner? Can we measure the ‘intrinsic quality’ at a stage? As we progressively evaluate in a staged manner, certain types of defects detected & removed and therefore quality grows. Can we chart this as “cleanliness index”? Source :http://www.environment.ucla.edu/media/images/Fetal_dev5.jpg 6
  • 7. Inspirations Sherlock Holmes Sherlock Holmes was person who applied deductive logic to solve mysteries. How can we see inspirations from Holmes to hypothesise the types of defects that may be present and prove presence of these? 7
  • 8. HBT - A Personal Scientific Test Methodology Test methodologies focus on activities that are driven by a process which are powered by tools, yet successful outcomes still depend a lot on experience. Typically methodologies are at organisational level. On the other hand HBT is a personal scientific methodology enabled by STEMTM , a defect detection technology to deliver “Clean Software” 8
  • 9. Scientific approach to detecting defects Cleanliness criteria What is the end user expectation of “Good Quality”? Potential Defect Types What types of issues can result in poor quality? Evaluation Stage When should I uncover them? Test Types How do I uncover them? Test Techniques What techniques to generate test cases? Scenarios/Cases What are the test cases? Are they enough? Scripts How do I execute them? Metrics & Management How good is it? How am I doing? 9
  • 10. How is HBT different from other test methodologies? The typical test methodologies in vogue have relied on strength of the process and the capability of the individual to ensure high quality in the given cost and time constraints. They lack the scientific rigour to enable full cost optimisation and more often rely on automation as the means to driving down cost and cycle time. For example, they do not provide a strong basis for assessing the quality of test cases in terms of their defect finding potential and therefore improve effectiveness and efficiency. HBT on the other hand enables you to set a clear goal for cleanliness, derive potential types of defect and then devise a “good net” to ensure that these are caught as soon as they get injected. It is intensely goal-oriented and provides you with a clear set of milestones allowing you to manage the process quickly and effectively. Goal drives la T ic B yp H Activities T Activities defect detection ...................................... ...................................... Powered by Powered by technology experience (STEM) ...................................... ...................................... ...................................... ...................................... ...................................... ...................................... ...................................... ...................................... ...................................... ...................................... hopefully results in Goal 10
• 11. Hypothesis Based Testing - HBT 2.0: A Quick Introduction. A personal, scientific test methodology: a SIX stage methodology powered by EIGHT disciplines of thinking (STEM™). Set up the cleanliness criteria, hypothesise the potential defect types, and pass the system under test (SUT) through a nine-stage cleanliness assessment that acts as a defect removal filter.
• 12. A quick introduction to HBT: SIX stages of DOING powered by EIGHT disciplines of THINKING.
  Stages: S1 Understand EXPECTATIONS, S2 Understand CONTEXT, S3 Formulate HYPOTHESIS, S4 Devise PROOF, S5 Tooling SUPPORT, S6 Assess & ANALYSE.
  Disciplines: D1 Business value understanding, D2 Defect hypothesis, D3 Strategy & planning, D4 Test design, D5 Tooling, D6 Visibility, D7 Execution & defect reporting, D8 Analysis & management. At the core of STEM are 32 core concepts. HBT is the personal test methodology, powered by STEM, the defect detection technology.
• 13. The EIGHT disciplines and the 32 core concepts of STEM. The disciplines: D1 Business value understanding, D2 Defect hypothesis, D3 Test strategy & planning, D4 Test design, D5 Tooling, D6 Visibility, D7 Execution & reporting, D8 Analysis & management. The core concepts supporting these disciplines include: Landscaping, Viewpoints, Interaction matrix, Operational profiling, Attribute analysis, Process landscape, GQM, Negative thinking, EFF model, Defect centricity principle, Orthogonality principle, Defect typing, Quality growth principle, Defect centred activity breakdown, Tooling needs assessment, Reductionist principle, Techniques landscape, Input granularity principle, Box model, Behaviour-stimuli approach, Separation of concerns, Test coverage evaluation, Automation complexity assessment, Minimal babysitting principle, Complexity assessment, Tooling needs analysis, Contextual awareness, Gating principle, Defect rating principle, Quality quantification model, Cycle scoping.
• 14. Connecting the HBT stages to the scientific approach to detecting defects: S1 and S2 yield clear expectations and the cleanliness criteria; S3 yields the potential defect types; S4 yields staged & purposeful detection and complete test cases; S5 yields sensible automation; S6 yields goal-directed measures.
• 15. Clear baseline: set a clear goal for quality (S1, S2). Example: clean water implies (1) colourless, (2) no suspended particles, (3) no bacteria, (4) odourless. What information (properties) can be used to identify this? Marketplace, customers, end users; requirements (flows), usage, deployment; features, attributes; stage of development, interactions; environment, architecture; behaviour, structure.
• 16. A goal focused approach to cleanliness (S3): identify the potential defect types that can impede cleanliness. Example defect types: data validation, timeouts, resource leakage, calculation, storage, presentation, transactional. The scientific approach to hypothesising defects is about looking at FIVE aspects (data, logic, structure, environment & usage) from THREE views (error injection, fault proneness & failure). Use the STEM core concepts Negative thinking (aspect) and the EFF model (view). "A Holmes-ian way of looking at properties of elements."
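The five-aspects-by-three-views examination above can be pictured as a checklist grid that every element under test is walked through. The following is a minimal illustrative sketch only; the function name and structure are hypothetical, not part of STEM:

```python
from itertools import product

# The FIVE aspects and THREE views named on the slide.
ASPECTS = ["Data", "Logic", "Structure", "Environment", "Usage"]
VIEWS = ["Error injection", "Fault proneness", "Failure"]

def hypothesis_grid():
    """Return every aspect/view pairing to be examined for potential defects."""
    return [(aspect, view) for aspect, view in product(ASPECTS, VIEWS)]

grid = hypothesis_grid()
# 5 aspects x 3 views = 15 cells to examine per element under test
```

Walking a feature through all fifteen cells is one way to make the "do not miss any" claim of the methodology concrete.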
• 17. Levels, types & techniques - STRATEGY (S4). NINE levels to cleanliness:
  L1 Input cleanliness
  L2 Input interface cleanliness
  L3 Structural integrity
  L4 Behaviour correctness
  L5 Flow correctness
  L6 Environment cleanliness
  L7 Attributes met
  L8 Clean deployment
  L9 End user value
  (The accompanying figure maps the potential defect types, PDT1.., to the quality levels via test types, T1.., and test techniques, TT1...)
• 18. Countable test cases & fault coverage (S4). Use the STEM core concepts Box model, Behaviour-stimuli approach, Techniques landscape and Coverage evaluation to model the behaviour, create behaviour scenarios and create the stimuli (test cases). Irrespective of who designs, the number of scenarios/cases shall be the same: COUNTABLE. Test cases for a given requirement shall have the ability to detect specific types of defects; this combination of requirements and fault traceability is FAULT COVERAGE.
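The "countable" claim above rests on deriving test cases mechanically from a behaviour model rather than from a designer's intuition. A minimal sketch, assuming a hypothetical behaviour model expressed as conditions with value classes (the names `design_cases` and `login` are illustrative, not STEM terminology):

```python
from itertools import product

def design_cases(conditions):
    """Enumerate behaviour scenarios by crossing the value classes of each
    condition; the count depends only on the model, not on the designer."""
    names = sorted(conditions)                 # fixed ordering -> determinism
    classes = [conditions[name] for name in names]
    return [dict(zip(names, combo)) for combo in product(*classes)]

# Hypothetical behaviour model for a login feature
login = {"user_id": ["valid", "unknown"],
         "password": ["correct", "wrong", "expired"]}
cases = design_cases(login)   # 2 x 3 = 6 countable cases
```

Because the enumeration is exhaustive over the model, two designers given the same model produce the same six cases, which is the point the slide makes.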
• 19. Focused scenarios + good automation architecture (S5). Level-based test scenarios (across the nine quality levels, from L1 Input cleanliness through L9 End user value) yield shorter scripts that are more flexible to change and easier to maintain.
• 20. "Cleanliness Index" - improved visibility (S6). Potential defect types and test techniques are tracked level by level, and a quality report records, for each requirement (R1, R2, ...), whether each cleanliness criterion (CC1, CC2, ...) is met, partially met or not met at each stage, like tracking the "growth of a baby".
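One way to roll the per-requirement, per-criterion statuses of such a quality report up into a single number is sketched below. The scoring weights and the name `cleanliness_index` are assumptions for illustration; HBT does not prescribe this particular formula:

```python
# Assumed scoring: Met = 1, Partially met = 0.5, Not met = 0.
SCORE = {"Met": 1.0, "Partially met": 0.5, "Not met": 0.0}

def cleanliness_index(report):
    """Aggregate per-requirement criteria statuses into a 0-100 index."""
    statuses = [s for criteria in report.values() for s in criteria.values()]
    return round(100 * sum(SCORE[s] for s in statuses) / len(statuses), 1)

report = {
    "R1": {"CC1": "Met", "CC2": "Met"},
    "R2": {"CC1": "Not met", "CC2": "Partially met"},
}
index = cleanliness_index(report)   # (1 + 1 + 0 + 0.5) / 4 = 62.5
```

A number like this, tracked cycle over cycle, gives the "growth of a baby" view the slide describes.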
• 21. HBT Stages: six stages to produce clean software.
• 22. Six staged methodology to produce clean software. The act of validation in HBT consists of "SIX stages of DOING". It commences with the first two stages, focused on a scientific approach to understanding the customer expectations and the context of the software. One of the key outcomes of the first two stages is the "Cleanliness Criteria", which gives a clear understanding of the expectation of quality. In the third stage, the cleanliness criteria and the information acquired in the first two stages are used to hypothesise the potential types of defects that are probable in the software. The fourth stage consists of devising a proof to scientifically ensure that the hypothesised defects can indeed be detected cost-efficiently. The fifth stage focuses on building the tooling support needed to execute the proof. The last stage is about executing the proof and assessing whether the software does indeed meet the cleanliness criteria.
  S1: Who are the customers and end users, what do they need, and what do they expect?
  S2: What are the features of the system, what technologies are used, what is the architecture?
  S3: What types of defects may be present? (In the fishing analogy: what types of fish to catch?)
  S4: What is the strategy, the plan, and the test scenarios/cases?
  S5: What tools do I need to detect the defects? (The boat in the fishing analogy.)
  S6: How am I doing? How is quality? (The fisherman.)
• 23. Stage #1: Understand EXPECTATIONS. The steps: understand the marketplace for the system, understand the technology(ies) used, understand the deployment environment, identify the end-user types and the number of users of each type, and identify the business requirements for each user type.
  The perception that end users have of how well the product delivers their needs denotes the quality of the software/system. "Needs" represent the various features that the software/system must have to allow the end user to fulfil his tasks effectively and efficiently. "Expectations", on the other hand, represent how well the needs are fulfilled.
  The final software/system may be deployed in different marketplaces addressing the needs of various types of customers. Hence it is imperative that we understand the various target markets (i.e. the marketplace) where the software or system will be deployed. There could be different types of customers in the marketplace, so it is necessary to identify the various types of customers and then the various types of end users present at each customer. What we have done is to start from the outward direction, i.e. the marketplace, and adopt a customer/end-user centric view to understand the needs and expectations.
  Once we have identified the various types of customers and the corresponding end users, we can move on to understand the various technologies that make up the software or the system, and also the deployment environment. The intent is to get a good appreciation of the "construction components" and the target environment of deployment. It is imperative that we have a good understanding of the internal aspects, and not merely the external aspects, of the system.
  Now we are ready for a detailed analysis of the various types of end users and the typical number of users of each type. Subsequent to this, we need to identify the various business requirements, i.e. "needs", for each end user. At the end of the stage, the objective is to have a good understanding of the various end users and their needs, paving the way to understanding expectations clearly.
• 24. Needs & expectations. Example product: a pencil.
  Education marketplace (end users: kids, seniors). NEEDS: should write, should have an eraser. EXPECTATIONS: should be attractive, should be non-toxic, the lead should not break easily.
  Drawing marketplace (end users: artists, draftsmen) and corporate marketplace (end users: management, engineering, admin). NEEDS: should write, should not need sharpening. EXPECTATIONS: thickness should be consistent, a variety of thicknesses should be available, a variety of hardnesses should be available.
  Needs are typically features that allow the user to get the job done; expectations are how well the need is satisfied. Remember functional & non-functional requirements?
• 25. What does "understanding" involve? A good understanding of what is expected is key to effective testing. To accomplish this, we must start by understanding who the various types of end users are, their requirements, and subsequently the expectations they have of these. Having deep domain knowledge helps immensely. But what if this is a domain I am not very conversant with? Is there a scientific way to understand? Understanding is a non-linear activity: it is about identifying the various elements and establishing connections between them. In the process of connecting the dots, missing information is identified, leading to intelligent questions; seeking answers to these questions deepens the understanding. Some of the information elements are "external to the system", i.e. marketplace, customer types, end users, business requirements, while some are "internal to the system", i.e. features, architecture, technology etc. Stage #1 (Understand EXPECTATIONS) focuses on external information while Stage #2 (Understand CONTEXT) focuses on internal information. "Good testing is about asking intelligent questions leading to deeper understanding."
• 26. Information extracted & artefacts generated. At each stage, certain information is extracted, understood and transformed into artefacts useful for effective & efficient testing. In Stage #1 the information is: marketplace, customers, user types, requirements, deployment environment, technology, lifecycle stage, and the number of users per type. The artefacts are the system overview, the user type list and the requirement map. The key outcomes, as demonstrated by the artefacts, are: the big picture of the system; the various end users ascertained for different classes of customers in different marketplaces; and a list of business requirements for each type of end user. In Stage #1 the focus is on external information relating to the marketplace, customers, end users and business requirements; this stage is useful for getting the bigger picture of the system, its potential usage and how it is deployed. "Good understanding is key to effective testing. Identifying who will use what is the beginning of becoming customer-focused."
• 27. Deliverables from Stage #1.
  System overview: should contain a good overview of the marketplace, the various types of customers, the end-user types, the deployment environment and the technologies that will be used to build the system.
  User type list: should contain a list of the various types of users for the different types of customers in the various market segments.
  Requirement map: should contain a list of the business requirements and the high-level technical features mapped to the various user types.
  STEM discipline applied in Stage #1: "Business value understanding". The two STEM core concepts of "Landscaping" and "Viewpoints" are useful in this stage to scientifically understand the expectations.
• 28. Stage #2: Understand CONTEXT. The steps: identify the technical features and baseline them, understand the dependencies, understand the profile of usage, identify the critical success factors, prioritise the value of end users and features, ensure the attributes are testable, and set up the cleanliness criteria.
  In this stage the objective is to understand the technical context in terms of the various features, their relative business value and the profile of usage, and ultimately to arrive at the cleanliness criteria. Note that at this stage we are moving inward, to get a better understanding of the technical features of the system.
  Having identified the various business requirements mapped to each type of end user, the next logical step is to drill down to the various technical features for each business requirement. It is important to understand that the various technical features that constitute the entire system do not really work in isolation. Therefore it is necessary to understand the interplay of the features, i.e. the dependencies of a feature on other features. Understanding these dependencies is very useful at later stages of the life cycle, particularly for regressing optimally.
  We now have a list of requirements and the corresponding technical features mapped to each end user, and are ready to understand the profile of usage of each feature by the various end users. To do this, it is important to understand the typical and maximum number of users for each user type, and then the volume of usage by each user for every technical feature. Since we already have a mapping between the end-user type and the technical feature, all we have to do is understand approximately how many times the feature will be used by a typical end user of that type. The intent is to gain a deeper understanding of the usage profile to enable effective strategy formulation at a later stage of HBT.
  It is not sufficient that the features work correctly; it is equally important that the various attributes, i.e. the non-functional aspects of the features, are indeed met. Typically the non-functional aspects of the system are identified at the highest system level and turn out to be fuzzy. Good testing demands that each requirement is testable. In HBT, attributes are identified for each key feature and then aggregated to form the complete set of non-functional requirements. We do this in two steps: first identifying the critical success factors for the technical features, and thereby for the business requirements, and then detailing the critical success factors to arrive at the non-functional requirements or attributes. Hence, after figuring out the usage profile, identify the success factors for each business requirement.
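The usage-profile step described above (number of users per type, times volume of usage per feature) is essentially an aggregation. A minimal sketch, with hypothetical user types and features for illustration:

```python
def usage_profile(users_per_type, uses_per_feature):
    """Estimate total daily uses of each feature across all end-user types:
    (number of users of a type) x (uses per user of that type), summed."""
    profile = {}
    for user_type, count in users_per_type.items():
        for feature, uses in uses_per_feature.get(user_type, {}).items():
            profile[feature] = profile.get(feature, 0) + count * uses
    return profile

# Hypothetical example: an order-entry system
users = {"clerk": 50, "manager": 5}
uses = {"clerk": {"create_order": 40, "search": 100},
        "manager": {"search": 20, "report": 10}}
profile = usage_profile(users, uses)
# create_order: 50*40 = 2000; search: 50*100 + 5*20 = 5100; report: 5*10 = 50
```

A profile like this makes it immediately visible which features carry the load and therefore deserve the most test focus.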
• 29. Stage #2: Understand CONTEXT (continued). Good testing is not about testing all features equally; it is about learning to focus more on those requirements/features that affect the customer experience significantly. This does not imply that some requirements/features are unimportant; it simply means that some are more important than others. Before we start detailing the various attributes, it is worthwhile to prioritise the various requirements/features and also the various end-user types. To prioritise, start by ranking the various types of end users in terms of their importance to the successful deployment of the final system; subsequently rank the importance of each requirement/feature for each end-user type. At the end of this exercise we should have a very clear understanding of the business value of each requirement/feature. Note that the understanding of the usage profile comes in very handy here.
  Now we are ready to derive the various attributes from the previously identified success factors and ensure that they are testable. A testable requirement simply means that it is unambiguously possible to state whether it failed or passed after executing it. In the context of attributes, testability implies that each attribute does indeed have a clear measure/metric. Therefore it is necessary to identify the measures, and the expected value of each measure, for every attribute.
  Having identified the various technical features and the corresponding attributes, the usage profile and the ranking of the requirements/features, we are now set to identify the various criteria that constitute the cleanliness of the intended software. Cleanliness criteria in HBT represent testable expectations and provide a very strong basis for goal-focused testing. They allow one to identify the potential types of defects and then formulate an effective strategy and a complete set of test cases. It is important that the cleanliness criteria are not vague or fuzzy.
• 30. Information extracted & artefacts generated. At each stage, certain information is extracted, understood and transformed into artefacts useful for effective & efficient testing. In Stage #2 the information is: features, usage, focus areas, attributes and interactions. The artefacts are the feature list, the value prioritisation matrix, the usage profile, the attributes list, the interaction matrix and the cleanliness criteria. The key outcomes, as demonstrated by the artefacts, are: a clear list of technical features; a ranking of features to focus on high-risk areas; the profile of usage; the list of attributes; the feature interactions; and clarity of expectations outlined as the "cleanliness criteria". In Stage #2 the focus is on internal information relating to the technical features, their interactions, focus areas, attributes, architecture and technology.
• 31. Deliverables from Stage #2.
  Feature list: should contain the list of technical features that forms the technical features baseline.
  Value prioritisation matrix: should contain the set of users, requirements and features ranked by importance.
  Usage profile: should contain the profile of the various operations by the various end users over time.
  Attributes list: should contain the key attributes stated objectively, i.e. with the expected value stated for every measure of each attribute.
  Interaction matrix: should show which feature affects which. Note that this should list the interactions and not their details; the objective is a rapid understanding of the linkages.
  Cleanliness criteria: should contain the criteria that need to be met to ensure that the deployed system is indeed clean.
  STEM discipline applied in Stage #2: "Business value understanding". The STEM core concepts of "Interaction matrix", "Operational profiling", "Attribute analysis" and "GQM" are useful in this stage to scientifically understand the context.
• 32. Cleanliness criteria. The cleanliness criteria are a mirror of expectations; the intention is to come up with criteria that, if met, will ensure that the system meets the expectations of the various end users. This is not to be confused with "acceptance criteria", which are typically at a higher level. Acceptance criteria are typically "extrinsic" in nature, describing aspects like long-duration running, migration of existing data, clean installation and running in the final deployment environment, and delivering the stated performance under real-life load conditions. Cleanliness criteria represent the "intrinsic quality": what properties should the final system have to ensure that it is deemed clean? Use the properties of the FIVE aspects of data, business logic, structure, environment and usage, as applied to your application, to arrive at criteria specific to your application. Note that the cleanliness criteria should cover both the functional and the non-functional requirements. The recommended style of writing a cleanliness criterion is: "That the system shall meet ...". Examples:
  That the system is able to handle large data (need to qualify "large").
  That the system releases resources after use.
  That the system displays meaningful progress for long-duration activities.
  That the system is able to detect an inappropriate environment/configuration.
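Since each cleanliness criterion must be testable (stage #2 demands a clear measure and an expected value for it), one could record criteria as structured data rather than free text. A minimal sketch; the class name, fields and example values are hypothetical, not an HBT artefact format:

```python
from dataclasses import dataclass

@dataclass
class CleanlinessCriterion:
    statement: str      # "That the system shall ..." style statement
    measure: str        # the metric that makes the criterion testable
    expected: str       # the expected value of that measure

    def is_testable(self) -> bool:
        """A criterion is testable only if a measure and its expected
        value are both stated; otherwise it is vague or fuzzy."""
        return bool(self.measure and self.expected)

cc = CleanlinessCriterion(
    statement="That the system is able to handle large data",
    measure="max rows imported without failure",
    expected=">= 1,000,000 rows",
)
```

Recording criteria this way forces the "need to qualify large" step at write time instead of discovering the fuzziness during execution.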
• 33. Stage #3: Formulate HYPOTHESIS. Having understood the expectations and the context, resulting in the formulation of the cleanliness criteria, we are ready to hypothesise the potential defects that could affect the cleanliness criteria. This is one of the most important stages of HBT: it results in a clear articulation of the various types of defects and forms the basis for the remaining stages. The key idea is to use external information like the features' behaviour, environment, attributes and usage, and internal information like the construction material, i.e. technology and architecture, to hypothesise the potential defects that may be present in the software under construction. Note also that the defect history of previous versions of the software, or of similar systems, can be used to construct and strengthen the hypothesis. Having hypothesised the potential defects, it is possible to scientifically construct a validation strategy and design adequate test cases, thereby ensuring that the final system to be deployed is indeed clean. The FIVE key aspects useful for constructing hypotheses of defects are: data, business logic, structure, environment and usage. This HBT stage allows us to follow a structured & scientific approach to hypothesising the potential defects, ensuring that we do not miss any.
• 34. Stage #3: Formulate HYPOTHESIS (continued). The steps: identify potential faults for the five aspects (data, business logic, structure, environment, usage); identify potential failures for the five aspects; identify potential errors that could be injected in the five aspects; identify the potential defects (PDs), combine them and remove duplicates; group similar PDs to form potential defect types (PDTs); and map the PDTs to the elements under test, i.e. features/requirements.
  First use external information like the data specification and the business logic specification to identify the potential defects. The data-related information that can help includes: data type, boundaries, volumes, rate, format and data interrelationships. The intent should be to get into a "negative mentality", think of what can go wrong with respect to all the information related to the data, and produce a list of potential defects.
  Now use the information related to the business logic to identify potential defects. Business logic, the intended behaviour, primarily transforms the various inputs, i.e. the input data, into outputs that the user values; the intention is to identify potential transformation losses. The business-logic information useful for arriving at potential defects includes: the various conditions and their linkages, values of conditions, exception handling conditions, access control, and dependencies on other parts of the software. Once again, get into a "negative mentality" and identify erroneous business flows of logic.
  Up to now the focus has been on external information, the specification of data and business logic. Now focus on internal information: the structure of the system and the construction materials (i.e. language, technology) used to build it. Structure at the highest level represents the deployment architecture, while structure at the lowest level represents the structure of the code. Structural information that can be useful includes: flow of control, resource usage, distributed architecture, interfacing techniques, exception handling, timing information, threading and layering. As explained above, continue the same train of thought, examining this information with the intent of identifying potential problems in the structure.
• 35. Stage #3: Formulate HYPOTHESIS (continued). Having identified potential defects using the behavioural and structural information, examine information related to the environment and how it can affect the deployed system. By environment we mean the associated hardware and software on which the system is deployed, and the hardware, software and application resources used by the system. The objective is to examine carefully how these can affect the finally deployed system. Key environment-related information includes: hardware/software versions, system access control, application configuration information, speed of hardware (CPU, memory, hard disk, communication links), environment configuration information (e.g. number of handles, cache size etc.) and system resources (hardware, OS and other applications).
  Until now we have taken a fault-centric approach, looking for potential faults (aka defects) by examining external or internal information. In addition to a fault-centric approach, we can also view the system from its potential failure points and identify the potential defects from there, and we can examine the system from an error-injection point of view, i.e. understand the kinds of errors that could be injected into the system to irritate the potential defects, if any. The objective is to examine the system from all three views (error-centric, fault-centric & failure-centric) and thereby ensure that we have not missed any potential defects. A failure-centric approach demands that we wear an end-user hat and identify the potential failures that could cause business loss. The cleanliness criteria formulated earlier come in very handy here, as they force us to think like a customer/end user. What we are trying to do is ensure that we have considered all the potential failures and therefore hypothesised the potential defects.
  Now move to a user-centric view and examine the various ways an end user could abuse the system, by identifying the various ways errors could be injected into it. Note that an end user does not always connote a physical person; it could be another system that interacts with the system via some interface. So examine the various points of interaction, look at the possibilities of error injection, and hypothesise the potential defects that could be irritated by these errors. Useful information here includes: workflows, data access, interesting ways of using the system, accessibility, environmental constraints faced by the physical end user, and potential deviant ways of using the system. Then consolidate the potential defects and group similar ones into potential defect types (PDTs). Finally, map the PDTs to the various elements under test, i.e. features/requirements. Now we have a clear notion of what types of defects we should look forward to uncovering in which parts of the system.
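The consolidation step at the end of the stage, deduplicating potential defects, grouping them into PDTs and mapping PDTs to elements under test, is mechanical enough to sketch. The helper names and example defects below are hypothetical, chosen only to illustrate the shape of the PD catalog and fault traceability matrix:

```python
def group_into_pdts(potential_defects):
    """Deduplicate potential defects (PDs) and group them by defect type.
    Input: (pd_description, pdt_name) pairs; sets collapse duplicates."""
    pdts = {}
    for pd, pdt in potential_defects:
        pdts.setdefault(pdt, set()).add(pd)
    return pdts

def fault_traceability(feature_pdt_pairs):
    """Map each element under test (feature/requirement) to its PDTs."""
    matrix = {}
    for feature, pdt in feature_pdt_pairs:
        matrix.setdefault(feature, set()).add(pdt)
    return matrix

pds = [("null customer id", "Data validation"),
       ("null customer id", "Data validation"),   # duplicate, collapsed
       ("order total overflow", "Calculation")]
pdts = group_into_pdts(pds)
matrix = fault_traceability([("Checkout", "Data validation"),
                             ("Checkout", "Calculation")])
```

The two outputs correspond to the Stage #3 deliverables: the PD catalog and the fault traceability matrix.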
• 36. Information extracted & artefacts generated. At each stage, certain information is extracted, understood and transformed into artefacts useful for effective & efficient testing. In Stage #3 the information is: data, structure, environment, business logic, usage, attributes and past defects. The artefacts are the PD catalog and the fault traceability matrix. The key outcomes, as demonstrated by the artefacts, are: the list of potential defect types, and the mapping between PDTs and the elements under test, i.e. features/requirements. In Stage #3 the focus is on hypothesising PDTs using the FIVE aspects of data, business logic, structure, environment & usage from THREE views: error-centric, fault-centric & failure-centric.
• 37. Deliverables from Stage #3.
  PD catalog: should contain the list of potential defects and the potential defect types.
  Fault traceability matrix: should contain the mapping between the potential defect types/potential defects and the features/requirements.
  STEM discipline applied in Stage #3: "Defect hypothesis". The STEM core concepts of "Negative thinking", "EFF model", "Defect centricity principle" and "Orthogonality principle" are useful in this stage to scientifically hypothesise defects.
• 38. Stage #4: Devise PROOF (Part #1: Test strategy & planning). HBT being a goal-focused test methodology, the intent is to figure out an optimal approach to detecting the potential defects in the system. Strategy in HBT is therefore about staging the order of defect detection, identifying the tests needed to uncover the specific defect types, and finally choosing the test techniques best suited for each type of test. Typically we have looked at the levels of testing, unit, integration and system, from the aspect of the "size" of the entity under test: unit testing is typically understood as being done on the smallest component that can be independently tested, integration testing once the various units have been integrated, and system testing as the last stage of validation, done on the whole system. What is not necessarily clear is the specific types of defects expected to be uncovered by each of these test levels. In HBT, the focus shifts to the specific types of defects to be detected, and the act of detection is staged to ensure an efficient detection approach. The notion in HBT is of quality levels, where each quality level represents a milestone towards meeting the final cleanliness criteria; in other words, each quality level is a step in the staircase of quality. The intent is to ensure that defects that can be caught earlier are indeed caught earlier. So the first step in formulating the strategy is to stage the potential defects and thereby formulate the various quality levels. In HBT there are NINE pre-defined quality levels, where the lowest quality level focuses on input correctness, progressing to the highest quality level, which validates that the intended business value is indeed delivered.
  • 39. Stage #4 : Devise PROOF (Part #1: Test Strategy & Planning) Understand scope Having identified the various types of potential defect types to be detected at various levels, it is now necessary to understand the specific types of tests needed to uncover these potential defects. In HBT each test shall be intensely goal focused. This means that a type of test shall only uncover specific type of defects. Choose quality levels The act of test type identification results in specific types of tests to be done at each of the quality levels. Now that we know what types of defects need to be detected when and where what type of tests, we need Identify test types to know how to design sufficient yet adequate test cases for each type of test. In HBT, a test technique is one that allows us to design test cases. Based on the types of defects i.e. types of tests, we have to identify the test technique(s) that is best suited for uncovering these types of the defects. Identify test techniques Now we have a clearer idea of various types of defects, the levels of detection, types of tests and test techniques., we are now ready to identify the optimal detection process best suited for design/execution of Identify detection process test cases. The the act of identifying detection process also allows us to understand whether we need technology support to be able to execute test cases and therefore pave the way for automation strategy. Identify tooling needs At this point in time we have a strategy and are ready to develop the detailed test plan. Some of the key elements of the test plan is the estimation of effort and time and formulating the various test cycles. In HBT cycles are formulated first and then effort and time estimated. Formulate cycles Finally potential risks that could come in the way of executing the test plan are identified and the risk management plan put in place. 
In summary, a strategy in HBT is a clear articulation of the quality levels, test types, test techniques and detection process model. 39
  • 40. Information extracted & artefacts generated At each stage, certain information is extracted, understood and transformed into artefacts useful to perform effective & efficient testing. Information in: cleanliness criteria, PDT, attributes, techniques, deployment environment, scope of work, #scenarios, risks. Artefacts out: test strategy, test plan. The key outcomes as demonstrated by the HBT artefacts are: ‣Test strategy ‣Test plan In Stage #4 (Part 1) the focus is on identifying the quality levels, test types, test techniques and the detection process. 40
  • 41. Deliverables from Stage #4 (Part #1) Test strategy Should contain the quality levels, test types, test techniques & detection process Test plan Should contain the test effort estimate, cycle details and the potential risk & mitigation plan. STEM Discipline applied in Stage #4 (Part #1) The STEM discipline “Strategy & Planning” is applied in this stage of HBT. The STEM core concepts of “Orthogonality principle”, “Quality growth principle”, “Defect centred activity breakdown” and “Cycle scoping” are useful in this stage to scientifically develop the strategy & plan. 41
  • 42. Stage #4 : Devise PROOF (Part #2: Test Design) The act of designing test cases is a crucial activity in the test life cycle. Effective testing demands that the test cases possess the power to uncover the hypothesised potential defects. It is necessary that the test cases are adequate and also optimal. In HBT the design is done level-wise, and within each level, test-type wise. Based on the level & type, the test entity may differ. The test design activity for an entity, for a type of test, at a quality level consists of two major steps: first design the test scenarios, then generate test cases for each scenario. Test scenarios are designed entity-wise and therefore there is a built-in notion of requirements traceability. In addition to requirements traceability, it is expected that the test scenarios and corresponding test cases are traced to the potential types of defects that they are expected to uncover. This is termed “Fault traceability”. 42
  • 43. Stage #4 : Devise PROOF (Part #2: Test Design) The steps are: Identify the test level & entities > Identify conditions & data > Model the intended behaviour > Generate the test scenarios > For each scenario, generate test cases > Trace scenarios to PDT & entity-under-test > Assess test adequacy by fault coverage analysis. The act of test design commences with the identification of the quality level and then the specific type of test for which the test cases are to be designed. This allows us to identify the various test entities for which test cases have to be designed. Having identified the test entities, it is then required to identify the conditions that govern the business logic and the data elements that drive these conditions. Subsequent to this, build the behavioural model and use it to generate test scenarios. Then for every scenario, identify the data that varies and generate values for each data element. Finally combine the data values to generate the test cases semi-formally. Since we have designed scenarios/cases entity-wise, requirements traceability is built-in, i.e. the designed scenarios/cases automatically trace to the entity (or requirement). Now map the scenarios/cases to the hypothesised PDTs to build the fault traceability matrix. Finally assess the adequacy of the designed scenarios/cases by checking test breadth, depth & porosity. 43
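The data-combination step above (generate values for each data element, then combine them into test cases for a scenario) can be sketched as follows. The scenario and data element names are purely illustrative, not prescribed by HBT:

```python
from itertools import product

# One test scenario (a path through the behaviour model) and the data
# elements that vary along it, each with its designed values.
scenario = "Login with credentials"
data_values = {
    "username": ["alice", "bob"],
    "password": ["correct", "wrong"],
    "remember_me": [True, False],
}

# Combine one value per data element to produce concrete test cases.
keys = list(data_values)
test_cases = [dict(zip(keys, combo)) for combo in product(*data_values.values())]

# 2 x 2 x 2 = 8 cases for this scenario: the count is derivable, i.e.
# the cases are "countable" in the HBT sense.
```

In practice a pairwise or other combinatorial reduction would often replace the full cartesian product, but the countability argument is the same.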
  • 44. Information extracted & artefacts generated At each stage, certain information is extracted, understood and transformed into artefacts useful to perform effective & efficient testing. Information in: conditions, data, logic, requirements, structure, PDT, defect escapes, attributes. Artefacts out: test scenarios & cases, requirements traceability matrix, fault traceability matrix. The key outcomes as demonstrated by the HBT artefacts are: ‣Test scenarios & cases ‣Requirements traceability matrix ‣Fault traceability matrix In Stage #4 (Part 2), the focus is on designing test scenarios/cases that can be proved to be adequate and have the power to uncover the hypothesised PDTs. 44
  • 45. Deliverables from Stage #4 (Part #2) Test scenarios & cases Should contain the test scenarios/cases for each entity for all types of tests at various quality levels Requirements traceability matrix Should contain the mapping between the scenarios/cases and the entity-under-test Fault traceability matrix Should contain the mapping between the scenarios/cases and the PDTs STEM Discipline applied in Stage #4 (Part #2) The STEM discipline “Test design” is applied in this stage of HBT. The STEM core concepts of Reductionist principle, Input granularity principle, Box model, Behaviour-Stimuli approach, Techniques landscape, Complexity assessment, Operational profiling and Test coverage evaluation are useful in designing test scenarios/cases scientifically. 45
  • 46. Stage #4 : Devise PROOF (Part #3: Metrics Design) In this stage, the objective is to design measurements to manage the process of validation in an effective and efficient manner. Since HBT is a goal focused test methodology, it is necessary to devise measurements that clearly show the progress towards this goal. The measurements in HBT are categorised into progress related measures, test effectiveness measures and system risk measures. Therefore it is necessary to identify the various aspects related to progress, effectiveness and system health. Once the aspects are identified, the key goals related to these are identified and then the metrics formulated. Finally it is necessary to understand when and how to measure. The steps are: Identify progress aspects > Identify adequacy (coverage) aspects > For each aspect, identify the intended goal to meet > For each goal, identify questions to ask > To answer these questions, identify metrics > Identify when and how to measure. 46
  • 47. Information extracted & artefacts generated At each stage, certain information is extracted, understood and transformed into artefacts useful to perform effective & efficient testing. Information in: quality aspects, progress aspects, process aspects, organisation goals. Artefacts out: metrics chart, when & how to measure. The key outcomes as demonstrated by the HBT artefacts are: ‣Chart of metrics that are goal-focused In Stage #4 (Part 3), the focus is on designing metrics that will ensure that we stay on course towards the goal. 47
  • 48. Deliverables from Stage #4 (Part #3) Metrics chart Should contain the list of metrics, the collection frequency and how each meets the goal. STEM Discipline applied in Stage #4 (Part #3) The STEM discipline “Visibility” is applied in this stage of HBT. The STEM core concepts of GQM and the Quality quantification model are useful in designing metrics that are goal-focused. 48
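The Goal-Question-Metric idea behind the metrics chart can be sketched as a small data structure: each goal has questions, and each question is answered by one or more metrics. The goals, questions and metric names below are illustrative examples, not part of STEM itself:

```python
# A minimal goal-focused metrics chart in the GQM style.
gqm_chart = {
    "Track progress towards the goal": {
        "How many planned cases have been executed?":
            ["% test cases executed"],
        "Are cycles completing on schedule?":
            ["cycle schedule variance (days)"],
    },
    "Judge test effectiveness": {
        "Are the hypothesised PDTs being uncovered?":
            ["defects found per PDT", "fault coverage %"],
    },
}

def metrics_for(goal):
    """All metrics that answer the questions attached to a goal."""
    return [m for metrics in gqm_chart[goal].values() for m in metrics]
```

A full metrics chart would also record, per metric, when and how it is measured, as the slide above requires.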
  • 49. Stage #5 : TOOLING support In this stage, the objective is to analyse the support that we need from tooling/technology to perform the tests. Automation does not only imply scripting, i.e. automating the designed scenarios; it could also involve development of a test bench or custom tooling to enable the system to be tested. This stage of HBT allows you to identify the tooling needs, understand the issues/complexity involved, perform cost-benefit analysis, evaluate existing tools for suitability/fitment and finally devise a good architecture that provides for flexibility/maintainability before embarking on automation. The steps are: Perform tooling benefit analysis > Identify automation scope > Assess automation complexity > Identify the order in which scenarios need to be automated > Evaluate tools > Design automation architecture > Develop scripts > Debug and baseline scripts. 49
  • 50. Information extracted & artefacts generated At each stage, certain information is extracted, understood and transformed into artefacts useful to perform effective & efficient testing. Information in: needs & benefits, scope, scenarios to automate, scenario fitness, technologies used, tool info., complexity info. Artefacts out: automation objectives document, complexity assessment report, tooling requirements, automation scope, automation architecture, tooling & scripts. The key outcomes as demonstrated by the artefacts are: ‣The reason for tooling & automation ‣Challenges involved ‣Requirements of tooling ‣Scope of tooling & automation ‣Architecture of automation ‣Automated scripts In Stage #5, the focus is on identifying tooling requirements and building automated scripts that deliver value & ROI. 50
  • 51. Deliverables from Stage #5 Needs & benefits document Should contain the technical & business need for automation Complexity assessment report Should contain the technical challenges of automation Tooling requirements Should contain the requirements expected of the automation Automation scope Should contain the scope of automation Automation architecture Should contain the architecture adopted for building tooling/scripts Tooling & Scripts The actual tools/scripts for performing automated testing STEM Discipline applied in Stage #5 The STEM discipline “Tooling” is applied in this stage of HBT. The STEM core concepts of Automation complexity assessment, Minimal babysitting principle, Separation of concerns and Tooling needs analysis are useful in adopting a disciplined approach to tooling & automation that delivers the ROI. 51
  • 52. Stage #6 : Assess & ANALYZE This stage is where you execute the test cases, record defects, report to the team and take appropriate action to ensure that the system/application is delivered on time with the requisite quality. The steps are: Identify test cases/scripts to be executed > Execute test cases, record outcomes > Record defects > Record learnings from the activity and the context > Record status of execution > Analyse execution progress > Quantify quality and identify risk to delivery > Update strategy, plan, scenarios, cases/scripts. 52
  • 53. Information extracted & artefacts generated At each stage, certain information is extracted, understood and transformed into artefacts useful to perform effective & efficient testing. Information in: execution information, defect information, context, key learnings. Artefacts out: execution status report, defect report, progress report, cleanliness report, updated strategy/plan/scenarios & cases. The key outcomes as demonstrated by the artefacts are: ‣Report of test execution & progress ‣Defect report ‣Report on cleanliness aka quality ‣Learnings from execution resulting in improved strategy, scenarios & cases ‣Any other key learnings In Stage #6, the focus is on ensuring disciplined execution, intelligent analysis and continuous learning to ensure that the goal is reached. 53
  • 54. Deliverables from Stage #6 Execution status report Should contain the status of test execution Defect report Should contain defect information Progress report Should contain the progress of execution and thereby of the cycle Cleanliness report Should contain the cleanliness index and how well the cleanliness criteria have been met Updated strategy, plan, scenarios & cases Updated strategy, plan, scenarios and cases based on learnings from execution Key learnings Key observations/learnings that could be useful in the future STEM Discipline applied in Stage #6 The STEM disciplines of “Execution & reporting” and “Analysis & management” are applied in this stage of HBT. The STEM core concepts of Contextual awareness, Defect rating principle, Gating principle and Cycle scoping enable disciplined execution, foster continual learning and help stay focused on the goal. 54
  • 56. Discipline #1 : Business value understanding This discipline enables one to understand the system and create a baseline of features, attributes and, finally, expectations. Good quality implies meeting expectations. This requires that we understand expectations in addition to the needs as delivered by the requirements; understanding the intended business value to be delivered is key to this. This discipline consists of SEVEN tools, each of which uses certain STEM core concepts to ensure these are done in a scientific and disciplined manner: How to understand a system (Landscaping | Viewpoints), How to create a functional baseline (Viewpoints | Reductionist principle), How to create an attribute baseline (Viewpoints | Reductionist principle), How to identify focus areas (Value prioritisation | Viewpoints), How to understand interdependencies (Interaction matrix), How to understand usage (Operational profiling | Viewpoints), and How to baseline expectations (Goal-Question-Metric | Viewpoints). 56
  • 57. Baseline provides the basis for future work What is to be tested needs to be clear. Remember Functional & Non-functional requirements? Functional Baseline Consists of the list of features to be tested; essentially an agreed-upon list of features. Attribute Baseline The non-functional aspects; the agreed-upon attributes & their values. 57
  • 58. Tools in D1 - Business value understanding
How to understand a system (Landscaping | Viewpoints): the system is viewed as a collection of interconnected information elements. This tool enables you to come up with intelligent questions to understand the various information elements and their interconnections.
How to create a functional baseline (Viewpoints | Reductionist principle): commencing from an external view of end users, various use cases (requirements) are identified and then the technical features that constitute the use cases. This tool enables you to clearly set up a functional baseline that is used as a basis for strategy, plan, design, tooling, reporting & management.
How to create an attribute baseline (Attribute analysis | Viewpoints): in addition to functional correctness, it is imperative that the attributes are met. This tool enables you to identify the attributes and ensure that these are testable.
How to identify focus areas (Viewpoints | Value prioritisation): all requirements/features are not equally valued by the end users. This tool allows you to rank the end users, requirements and features, thereby enabling prioritisation of testing based on the risk and perceived value.
How to understand usage (Viewpoints | Operational profiling): understanding the real life usage profile is about knowing what operations are in progress at a point in time, how many concurrent operations there are, and their rate of arrival. This tool allows you to arrive at a closer-to-reality usage profile of the system to ensure effective non-functional tests.
How to understand interdependencies (Interaction matrix): understanding how a feature/requirement affects or depends on other features/requirements is useful to understand impact & re-testing effort. This tool allows you to rapidly understand the interdependencies.
How to baseline expectations (Viewpoints | Goal-Question-Metric): this tool allows you to derive cleanliness criteria that reflect the expectations. 58
  • 59. Customers & End Users Example product: a pencil. Customer segments and their end users: Education (Kids, Seniors), Drawing (Artists, Draftsmen), Corporate (Management, Engineering, Admin). A product or an application may be sold in different market places made up of different kinds of customers. Each class of customer may have different types of end users who use the product. It is important to understand that each end user may have different needs & expectations. Testing is about ensuring that the product will indeed satisfy the variety of needs & expectations. 59
  • 60. Needs & Expectations e.g. a pencil. For Education customers (Kids, Seniors): NEEDS: should write; should have an eraser. EXPECTATIONS: should be attractive; should be non-toxic; lead should not break easily. For Drawing customers (Artists, Draftsmen): NEEDS: should write; should not need sharpening. EXPECTATIONS: thickness should be consistent; a variety of thicknesses should be available; a variety of hardnesses should be available. Needs are typically features that allow the job to get done. Expectations are how well the need is satisfied. Remember Functional & Non-functional requirements? 60
  • 61. Customer Profile Customer #1 Customer #2 Customer #3 Customer #4 Different customers have different types of end users, and a differing number of users for each type of end user. 61
  • 62. Customer Profile & Usage What types of users are there? How many users of each type? What does each one use? What is the order of importance? What is the usage frequency? Different end users may use the system (features F1..F8) differently in terms of what they use, the frequency of usage and how they value each feature. 62
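The usage questions above amount to building an operational profile: for each end-user type, which features they use and how heavily. A minimal sketch, with illustrative user types and relative frequencies (the structure is an assumption, not an HBT artefact format):

```python
# Operational profile: per end-user type, relative usage frequency of features.
profile = {
    "Admin":   {"F1": 0.1, "F7": 0.9},
    "Analyst": {"F2": 0.5, "F3": 0.3, "F4": 0.2},
}

def features_used(user_type):
    """Features ordered by how heavily this end-user type exercises them."""
    usage = profile[user_type]
    return sorted(usage, key=usage.get, reverse=True)
```

Such a profile feeds non-functional test design (load mix, concurrency) as well as test prioritisation.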
  • 63. Business Value Ultimately end users need the system to do their job BETTER, FASTER, CHEAPER and deliver value to their customers. Understand that it is about “business value” of system - how does the system help my business to do BETTER, FASTER, CHEAPER. 63
  • 64. Discipline #2 : Defect hypothesis 64
  • 65. Discipline #2 : Defect hypothesis This discipline enables one to hypothesise the potential defect types that may be present in the system under test and set up a clear, goal-focused approach to detection/prevention. A goal-focused approach implies that we map the hypothesised potential defect types (PDT) to the elements-under-test, i.e. features/requirements. This discipline consists of TWO tools, each of which uses certain STEM core concepts to ensure these are done in a scientific and disciplined manner: How to hypothesise defects (Negative thinking | EFF model | Defect centricity principle) and How to setup goal-focus (Orthogonality principle). Hypothesis is done by scientifically examining certain properties of the system and can be complemented by one's experience. 65
  • 66. Tools in D2 - Defect hypothesis How to hypothesise defects (Negative thinking | EFF model | Defect centricity principle): hypothesis is done by examining properties of the system in a scientific manner. Examining the properties of the elements that make up the system from five aspects (data, business logic, structure, environment and usage) and three views (error-injection, fault-proneness and failure) allows you to scientifically come up with potential defects. Subsequently, by grouping similar potential defects, we arrive at potential defect types (PDT). How to setup goal-focus (Orthogonality principle): mapping the PDTs to the elements of the system makes you clear as to what type of defect you want to uncover in each element, enabling you to be goal-focused. 66
  • 67. “Properties of the system” End user expectations Cleanliness criteria “affected by” Issues in specifications, Potential Defect Types (PDT) structure, environment and behaviour 67
  • 68. “Properties of the system” Expectations Cleanliness criteria Needs Features “impedes” Environment Behavior Structure Potential Defect Types (PDT) Material Expectations delivered by Needs (Requirements) via Features that display Behavior constructed from Materials in accordance to a Structure in a given Environment 68
  • 69. Setting up a Clear Goal Before we invest effort in devising a test strategy, plan & test cases, let us be clear about the goal... What types of defects are we looking for? [Diagram: STEM as a wheel of EIGHT disciplines (D1 Business value understanding, D2 Defect hypothesis, D3 Strategy & planning, D4 Test design, D5 Tooling, D6 Visibility, D7 Execution & reporting, D8 Analysis & management) built on 32 core concepts, driving the SIX HBT stages (S1 Understand EXPECTATIONS, S2 Understand CONTEXT, S3 Formulate HYPOTHESIS, S4 Devise PROOF, S5 Tooling SUPPORT, S6 Assess & ANALYSE).] What types of defects may be present? i.e. what types of fishes to catch. 69
  • 70. Potential Defect Types CLEAN implies Functional CLEANLINESS + Attribute CLEANLINESS. For each entity, ask: what types of defects will affect my 1. Functional behaviour 2. Attributes? The Potential Defect Types (PDT) are what affect the cleanliness criteria. 70
  • 71. Potential Defect (PD) & Potential Defect Type (PDT) We may come up with a variety of potential defects for an entity-under-test. A set of similar potential defects (PD) may be grouped into a class of defects, i.e. a Potential Defect Type (PDT). The intent is to reduce the many potential defects to a smaller set of classes of defects to uncover. Example: PDT1: User Interface Issues PD1: Spelling mistakes in UI PD2: UI elements not aligned PD3: UI standards violated 71
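The PD-to-PDT grouping described above can be sketched directly, reusing the slide's own example plus one extra illustrative defect:

```python
from collections import defaultdict

# Each potential defect (PD) tagged with the class (PDT) it belongs to.
potential_defects = [
    ("Spelling mistakes in UI", "User Interface Issues"),
    ("UI elements not aligned", "User Interface Issues"),
    ("UI standards violated", "User Interface Issues"),
    ("Accepts out-of-range amount", "Input Validation Issues"),  # illustrative
]

# Group similar PDs into PDTs: the smaller set of defect classes to uncover.
pdts = defaultdict(list)
for pd, pdt in potential_defects:
    pdts[pdt].append(pd)
```

The result is the smaller set of defect classes that strategy, design and tooling are then targeted at.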
  • 72. Information used for hypothesis Intended functionality, attributes, expectations, defect history and personal experience are all used to hypothesise the Potential Defect Types. 72
  • 73. Aspects used to hypothesise The two broad areas of validation for any entity-under-test are: ‣Functionality ‣Attributes Our objective is to ensure that the functional aspects of the system are correct and that they meet the expected attributes. So, how can we hypothesise potential defect types for a given entity-under-test? In this discipline of HBT, we decompose the entity into FIVE elemental aspects: ‣Data ‣Business logic ‣Structure ‣Environment ‣Usage i.e. a feature is used by end user(s) and implements the behaviour via business logic that is built using structural materials that use resources from the environment. 73
  • 74. Views on these Aspects Each “Aspect” can be viewed from THREE angles. Error injection What errors can we inject? ERROR irritates FAULT Fault proneness What inherent faults can we “irritate”? FAULT propagates resulting in FAILURE Failure What failures may be caused? 74
  • 75. Aspects & Views Combined (Error injection | Fault proneness | Failure)
Data: What kinds of erroneous data may be injected? | What kind of issues could data cause? | What kinds of bad data can be generated?
Business Logic: What conditions/values can be missed? | How can conditions be messed up? | What can be incorrect results when conditions are combined?
Structure: How can we set up incorrect “structure”? | How can structure mess up the behaviour? | What kinds of structure can yield incorrect results?
Environment: What is incorrect environment setup? | How can resources in the environment cause problems? | How can the environment be messed up?
Usage: In what ways can we use the entity interestingly? | What kinds of usage may be inherently faulty? | What can be a poor usage experience? 75
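The five-aspects-by-three-views grid above is just a cross product, which makes it easy to generate a checklist of hypothesis prompts to work through per entity. The prompt wording below is illustrative; the aspects and views are from the slides:

```python
from itertools import product

aspects = ["Data", "Business Logic", "Structure", "Environment", "Usage"]
views = ["Error injection", "Fault proneness", "Failure"]

# One hypothesis prompt per (aspect, view) cell of the grid.
prompts = [f"{view}: what could go wrong with {aspect.lower()}?"
           for aspect, view in product(aspects, views)]

# 5 aspects x 3 views = 15 cells to examine for potential defects.
```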
  • 76. Generalised PDTs for “Data” Aspect 76
  • 77. Generalised PDTs for “Business Logic” Aspect 77
  • 78. Generalised PDTs for “Structure” Aspect 78
  • 79. Generalised PDTs for “Environment” Aspect 79
  • 80. Generalised PDTs for “Usage” Aspect 80
  • 81. TWO Important Core Concepts used in Defect Hypothesis Negative Thinking is an ASPECT-oriented approach (Data, Business Logic, Structure, Environment, Usage). The EFF (Error-Fault-Failure) model is a VIEW-oriented approach (Error injection, Fault proneness, Failure). In real life usage, we combine both of these. 81
  • 82. How to write PDTs “Language shapes the way we think.” Hence it is necessary to have a simple and structured approach to documenting the PDTs identified. When writing PDTs, commence the sentence with “That the system/entity may/may-not....” Write this in defect oriented form. Write each PDT as a sentence. Do not be verbose. e.g. That the system may accept data out of bounds. That the system may leak resources. 82
  • 83. Discipline #3 : Strategy & Planning 83
  • 84. Discipline #3 : Strategy & Planning This discipline enables one to adopt a structured and disciplined approach to formulating a goal-focused strategy, estimating effort and then formulating a plan. In HBT, strategy is defined as a clear combination of what to test, when to test, how to design the scenarios for each test and finally how to execute the tests. This means defining the scope of test, the types of test, the quality levels, the test techniques for design and what tooling support is needed to execute the strategy. This discipline consists of SIX tools, each of which uses certain STEM core concepts to ensure these are done in a scientific and disciplined manner: How to identify scope (Cycle scoping), How to formulate strategy (Orthogonality principle | Quality growth principle | Process landscape | Techniques landscape), How to formulate cycles (Cycle scoping | Quality growth principle), How to estimate effort (Defect centred activity breakdown | Approximation principle), How to assess tooling support (Tooling need analysis), and How to setup criteria (Gating principle). 84
  • 85. Tools in D3 - Strategy & Planning
How to identify scope (Cycle scoping): the focus of this tool is to allow you to clearly identify the scope of testing that is expected of you by identifying the types of tests, i.e. the PDTs that you are expected to uncover.
How to formulate strategy (Orthogonality principle | Quality growth principle | Process landscape | Techniques landscape): strategy is about identifying the levels of quality, the types of tests, the test techniques for ensuring adequacy and the mode of execution of cases. This tool enables a disciplined approach to developing a goal-focused strategy that will be effective & efficient.
How to assess tooling support (Tooling need analysis): leveraging technology to develop custom tooling and automate scenarios is key to improving efficiency and effectiveness. This tool enables you to clearly identify the tooling & scripting requirements so you leverage your investment in tooling & automation.
How to estimate effort (Defect centred activity breakdown | Approximation principle): using PDTs as the basis, this tool enables a logical way to estimate effort. Having identified the PDTs, mapped them to the elements-under-test, identified the types of test to uncover them and derived the #cycles of test by scoping out cycles, this tool proceeds to estimate the effort for each element-under-test for each type of test for every cycle, and sums these to arrive at the potential total effort.
How to formulate cycles (Cycle scoping | Quality growth principle): formulating cycles requires a clear focus on the scope of every cycle. This tool enables you to be clear as to what PDTs you plan to uncover at different points in time of the development, ensuring that the quality growth is in accordance with the quality levels.
How to setup criteria (Gating principle): effective & efficient testing implies that good defects are indeed found at the right stages of software development. This tool enables setting criteria for each stage of development and release. 85
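The defect-centred effort estimation described above (sum effort over each element-under-test, for each type of test, for every cycle) can be sketched as follows; the elements, test types and person-hour figures are purely illustrative:

```python
# (element-under-test, test type, cycle) -> estimated person-hours
effort = {
    ("Login", "Data validation test", 1): 4,
    ("Login", "Functionality test", 1): 6,
    ("Login", "Functionality test", 2): 3,
    ("Reports", "Functionality test", 1): 8,
}

# Total effort is the sum over all elements, test types and cycles.
total_effort = sum(effort.values())

# Effort can also be rolled up per cycle for planning.
cycle1_effort = sum(h for (elem, test, cycle), h in effort.items() if cycle == 1)
```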
  • 86. Strategy should help in: Planning (estimation, a clear plan of action, scheduling, infrastructure work), Design (ensuring high coverage, test techniques, automation architecture), Execution (cost effective cycle planning, what is executed manually vs automated), Assessment (metrics: what to track & when, how to interpret, staying on goal). 86
  • 87. Contents of a test strategy Features to focus on List down the major features of the product. Rate the importance of each feature (Importance = Usage frequency x Failure criticality). Potential issues to uncover Identify the PDTs that you look forward to detecting. Quality Levels Identify the levels of quality that are applicable and map the PDTs to these levels. Tests & Techniques State the various tests that need to be done to uncover the above PDTs. Identify the test techniques that may be used for designing effective test cases. Execution approach Outline which tests will be done manually/automated. Outline the tools that may be used for automated testing. Test metrics to collect & analyse Identify measurements that help analyse whether the strategy is working effectively. 87
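The importance formula above (Importance = Usage frequency x Failure criticality) can be sketched as a simple ranking. The features and the 1-5 rating scale are assumptions for illustration; only the formula comes from the strategy outline:

```python
# Illustrative feature ratings on a 1-5 scale for usage and criticality.
features = {
    "Login":   {"usage": 5, "criticality": 5},
    "Reports": {"usage": 3, "criticality": 2},
    "Export":  {"usage": 1, "criticality": 4},
}

# Rank features by Importance = Usage frequency x Failure criticality.
ranked = sorted(features,
                key=lambda f: features[f]["usage"] * features[f]["criticality"],
                reverse=True)
```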
  • 88. Goal-focused strategy A goal-focused strategy states WHAT PDTs are to be uncovered, WHEN (quality levels) and HOW (test types). In HBT, there exist NINE quality levels, with certain PDTs to be uncovered at each level: L1 Input cleanliness (Data validation test), L2 Input interface cleanliness (UI test, Usability), L3 Structural integrity (Structure test), L4 Behaviour correctness (Functionality, Data integrity), L5 Flow correctness (Flow correctness test), L6 Environment cleanliness (“Good citizen” test), L7 Attributes met (LSPS, Security, Usability, Reliability, Volume), L8 Clean deployment (SI, Migration, Compatibility), L9 End user value (End to End Flow test). 88
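The "staircase of quality" idea, where each level is a milestone built on the levels below it, can be sketched as an ordered list with a simple gating helper (the helper function is an illustrative interpretation of the quality growth principle, not a STEM-defined API):

```python
# The NINE HBT quality levels, lowest first.
quality_levels = [
    (1, "Input cleanliness"),
    (2, "Input interface cleanliness"),
    (3, "Structural integrity"),
    (4, "Behaviour correctness"),
    (5, "Flow correctness"),
    (6, "Environment cleanliness"),
    (7, "Attributes met"),
    (8, "Clean deployment"),
    (9, "End user value"),
]

def lowest_unmet(met):
    """Quality growth: the next level to target is the lowest one not yet met."""
    for num, name in quality_levels:
        if num not in met:
            return name
    return None  # all nine levels met: cleanliness criteria reached
```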
  • 89. Discipline #4 : Test design This discipline enables one to come up with scenarios/cases that can be proven to be adequate. The design of scenarios/cases uses a model based approach, with tools to help you build the behavioural model and subsequently generate test scenarios/cases from the model, ensuring these are “countable” (i.e. can be proved to be sufficient) and traced to faults (i.e. have the power to uncover the hypothesised defects). This discipline consists of THREE tools, each of which uses certain STEM core concepts to ensure these are done in a scientific and disciplined manner: How to model behaviour (Box model | Techniques landscape | Operational profiling), How to design scenarios & cases (Behaviour-Stimuli approach | Techniques landscape | Input granularity principle), and How to ensure adequacy (Complexity assessment | Coverage evaluation). The tools in this discipline pay a lot of attention to the form & structure of test cases, and these conform to the HBT test case architecture. The structure of test cases is seen as crucial to ensuring adequacy and optimality. 89
  • 90. Tools in D4 - Test design How to model behaviour (Box model | Techniques landscape | Operational profiling): this tool enables you to understand the intended behaviour of the element-under-test and create a behaviour model to ensure that the scenarios & cases subsequently designed are indeed complete. This commences by identifying the conditions that govern behaviour and the data elements that drive the conditions. How to design scenarios & cases (Behaviour-Stimuli approach | Techniques landscape | Input granularity principle): this tool enables you to design scenarios & cases that can be proved to be adequate. A scenario in HBT is a path or flow of a given behaviour, while a test case is a combination of data (stimuli) that makes the system take that path. The focus is on ensuring that the number of scenarios can be proven to be “countable” (i.e. no more and no less) and that therefore the test cases too are countable. How to ensure adequacy (Complexity assessment | Coverage evaluation): this tool enables you to ensure the designed scenarios/cases are indeed adequate. Tracing scenarios/cases to PDTs enables “fault coverage”, i.e. ensuring the hypothesised PDTs can indeed be covered. In conjunction with “countability”, the adequacy can be proved in a logical manner. This tool can also be used to review/assess the completeness/adequacy of existing scenarios/cases. 90
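The fault coverage check described above can be sketched as a set comparison: every hypothesised PDT should be traced to at least one scenario, and any gap flags the design as inadequate. PDT and scenario names are illustrative:

```python
# The PDTs hypothesised for this entity and test type.
hypothesised_pdts = {"PDT1", "PDT2", "PDT3"}

# Fault traceability: scenario -> the PDTs it is designed to uncover.
fault_traceability = {
    "Scenario A": {"PDT1"},
    "Scenario B": {"PDT1", "PDT3"},
}

# A PDT with no scenario tracing to it is a fault coverage gap.
covered = set().union(*fault_traceability.values())
uncovered = hypothesised_pdts - covered
```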
  • 91. Objective of test design Test design is a key activity for effective testing. This activity produces test scenarios/cases. The objective is to come up with a complete yet optimal number of scenarios/cases that have the power to uncover good defects. Do we have a net that is broad, deep, strong, with small enough holes to catch the fishes that matter? 91
  • 92. Effective testing is the outcome of good test cases. Therefore the design of test cases plays a crucial role in delivering clean software. In the fishing analogy, the test cases are the “net” to catch the “fishes” (defects), and the net needs to be broad, deep and strong, with a fine mesh. In HBT, the test design activity is done quality level-wise and within each level test-type-wise. At each level it is done in two stages: design the test scenarios first and then the test cases. Test scenarios are designed entity-wise and therefore there is a built-in notion of requirements traceability. In addition to requirements traceability, it is expected that the test scenarios and the corresponding test cases are traced to the potential types of defects that they are expected to uncover. This is termed fault traceability. The act of test design commences with the identification of the test level and then the specific type of test for which the test cases are to be designed. This allows us to identify the various test entities for which test cases have to be designed. Having identified the test entities, it is then required to partition the problem into two parts: first to understand the behaviour (business logic) and then to understand the various data elements that drive the business logic. This allows us to identify the various conditions in the business logic and to model the behaviour more formally. The behaviour model is used to generate test scenarios. Then, for every given scenario, we have to understand the data elements that vary and come up with an optimal number of values for each data element. The various values of each data element are then combined to generate the test cases. Note that only the external specification, and therefore black box techniques, have been used until now to design the scenarios and cases. It is equally necessary to use the structural information of the entity under test to refine the scenarios and test cases. Finally we have to trace the scenarios and the corresponding test cases to the potential defects that have been hypothesised for the entity under test for the given test type. This allows us to ensure that the test cases do indeed have the power to uncover the hypothesised defects, and thereby that the test cases are adequate. The final step involves assessment of the test breadth, depth and porosity, to be sure the test cases are indeed adequate. 92
• 93. Approach to test design Remember the NINE quality levels: L9 End user value, L8 Clean Deployment, L7 Attributes met, L6 Environment cleanliness, L5 Flow correctness, L4 Behavior correctness, L3 Structural integrity, L2 Input interface cleanliness, L1 Input cleanliness. The test scenarios/cases are designed level-wise. Note that the entity to be tested at each level may be different: at the higher levels the entities to be tested are requirements/business flows, whereas at lower levels they may be screens/APIs etc. At each level the approach to test design is: design test scenarios first, and then come up with test cases.
• 94. What is a Test Scenario & Test Case? When we test, our objective is to check that the intended behaviour is what is implemented. What do we need to do? For an entity under test, we need to come up with the various potential behaviours and check each one of these. That is, we need a set of scenarios to evaluate the behaviours. A Test Scenario reflects a behaviour and is the path from the beginning to the end. How do we check a behaviour? We do this by stimulating the behaviour with a combination of inputs and checking the outputs. A Test Case is a combination of inputs to stimulate the behaviour. Positive/Negative test scenarios/cases: a positive scenario is the expected behaviour of the entity under test; a negative scenario is behaviour that is not expected of the entity under test. Test cases that are part of a positive scenario are positive test cases; test cases that are part of a negative scenario are negative test cases.
• 95. Hierarchical test design For each entity under test, generate test scenarios first, and then test cases. This is Hierarchical Test Design. The business logic is a collection of conditions, with inputs and outputs. Combinations of the CONDITIONS result in test scenarios; combinations of the INPUTS result in test cases.
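As a minimal sketch of the hierarchy above (the condition names, input names and values are invented for illustration, not part of HBT itself), combinations of conditions yield scenarios, and combinations of input values yield the test cases for a scenario:

```python
from itertools import product

# Hypothetical business-logic conditions for an entity under test.
conditions = {
    "user_registered": [True, False],
    "cart_non_empty": [True, False],
}

# Combinations of the CONDITIONS give test scenarios.
scenarios = [dict(zip(conditions, combo)) for combo in product(*conditions.values())]

# For one scenario, combinations of the INPUT values give test cases.
inputs = {
    "payment_mode": ["card", "wallet"],
    "quantity": [1, 99],
}
cases = [dict(zip(inputs, combo)) for combo in product(*inputs.values())]

print(len(scenarios))  # 4 scenarios
print(len(cases))      # 4 test cases for the chosen scenario
```

Exhaustive combination is used here only to keep the sketch short; the techniques landscape later in the deck lists optimal combination strategies (single fault, at-least-once, pair-wise) that reduce this count.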
• 96. Information needed for design, level by level (key tests in parentheses):
L9 End user value (End-to-end flow test): end user scenarios of usage, end user expectations.
L8 Clean Deployment (SI, Migration, Compatibility): environment (HW, SW, versions), data volumes/formats.
L7 Attributes met (LSPS, Security, Usability, Reliability, Volume): usage profile, data sizes, access controls, security aspects and other attribute information as applicable.
L6 Environment cleanliness ("Good citizen" test): environment dependencies & resource usage info.
L5 Flow correctness (Flow correctness test): behavioural (conditions) & data specification.
L4 Behavior correctness (Functionality, Data integrity): behavioural (conditions) & data specification.
L3 Structural integrity (Structure test): information about architecture & code structure.
L2 Input interface cleanliness (UI test, Usability): interface information and user information.
L1 Input cleanliness (Data validation test): data specification.
• 97. What to do when requisite information is missing/not available? When analyzing a specification, look for the conditions that govern the behavior (business logic) and the data. It is quite possible that not all the conditions are clearly listed, or that the values for the conditions are not clearly stated. What is to be done in such cases? It is a cardinal sin to ignore missing conditions! It is imperative that you identify the list of conditions and the values that they take. In case these are not available, question! The true value of effective testing lies in uncovering the missing information. Note that you have in effect uncovered issues in the specification, which is great.
• 98. How do we know that test scenarios/cases are adequate? 1. Test scenarios/cases shall be COUNTABLE. That is, the number of test scenarios/cases designed shall be proven to be no more and no less than needed. This can only be done (a) if the behavior is modeled and scenarios are generated from the model, and (b) values for test inputs are generated and combined formally. 2. There shall exist scenarios/cases for each requirement/feature: REQUIREMENTS TRACEABILITY. 3. Each type of defect (PDT) hypothesized for every requirement/feature shall be traced to scenarios/cases: FAULT TRACEABILITY. 4. At the lower level, scenarios/cases shall cover all the code (statements or conditions or multiple-conditions or paths): CODE COVERAGE. Countable scenarios/cases: Feature = Business Logic + Data. Business logic is implemented as a set of conditions that have to be met. For a given test entity, do we clearly understand all the conditions that govern the behavior? Have all effective combinations been combined to generate the test scenarios? Do we clearly understand the specification of each test input (data)? Have we generated all the values for each input? Have we combined these values optimally?
• 99. Requirements traceability Requirements traceability is about ensuring that each requirement does indeed have test case(s). So after we design test cases, we map test cases to requirements to ensure that all the requirements are indeed being validated (R1→TC1, R2→TC2, R3→TC3, ..., Rm→TCi: every test case is mapped to a requirement, or every requirement does indeed have a test case). This is typically used as a measure of test adequacy. Let us consider a situation wherein there is exactly one test case for each requirement. Are the test cases adequate? No! Requirements traceability is a necessary condition for test adequacy, but not a sufficient one. Also understand that the expectation of a requirement is not merely about functional correctness; certain attributes, i.e. non-functional aspects, also have to be met. So non-functional test cases need to be traced too.
• 100. Fault traceability Having hypothesized the PDTs (Potential Defect Types) in Stage #3, the natural thing to do is to map these to the requirement (or entity under test). This is accomplished as part of Stage #4 to develop the test strategy. Continuing further in Stage #4, the specification of the requirement is used to design test scenarios and cases. Note that in this approach, test cases are automatically traced to requirements. Given that the requirement could have the PDTs that have been mapped earlier (PDT1→R1, PDT2→R2, ..., PDTi→Rm), let us map the designed test cases to the PDTs (TC1→PDT1, TC2→PDT2, ..., TCn→PDTi). The intent of this is to ensure that the designed test cases do have the power to uncover the hypothesized defects. Mapping the PDTs to each requirement and its associated test cases is termed Fault Traceability in HBT. Fault traceability in conjunction with requirements traceability makes the condition for test adequacy necessary and sufficient.
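The two traceability checks can be sketched mechanically. In this illustration (all requirement, PDT and test-case identifiers are hypothetical), requirements traceability flags requirements with no test cases, and fault traceability flags hypothesized PDTs not covered by any of that requirement's cases:

```python
# Hypothetical mappings for illustration only.
req_to_cases = {"R1": ["TC1", "TC2"], "R2": ["TC3"], "R3": []}
case_to_pdts = {"TC1": ["PDT1"], "TC2": ["PDT2"], "TC3": []}
hypothesised_pdts = {"R1": ["PDT1", "PDT2"], "R2": ["PDT3"]}

# Requirements traceability: every requirement has at least one test case.
untraced_reqs = [r for r, tcs in req_to_cases.items() if not tcs]

# Fault traceability: every hypothesised PDT of a requirement is covered
# by at least one of that requirement's test cases.
uncovered = []
for req, pdts in hypothesised_pdts.items():
    covered = {p for tc in req_to_cases[req] for p in case_to_pdts[tc]}
    uncovered += [(req, p) for p in pdts if p not in covered]

print(untraced_reqs)  # ['R3']
print(uncovered)      # [('R2', 'PDT3')]
```

Here R3 fails the necessary condition (no test case at all), while R2 passes requirements traceability yet fails fault traceability: its one test case cannot uncover PDT3.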
• 101. Fault traceability + Requirements traceability Requirements traceability is "necessary but not sufficient". Assume that each requirement had just one test case. This implies that we have satisfied the requirements traceability objective. What we do not know is whether additional test cases are needed for some of the requirements. So requirements traceability is a necessary condition, not a sufficient condition. What does it take to be sufficient? If we have a clear notion of the types of defects that could affect the customer experience (PD1...PDn) and map these to the requirements (R1...Rm) and to the test cases (TC1...TCi), we have fault traceability. This allows us to be sure that our test cases can indeed detect those defects that will impact customer experience.
• 102. Test design documentation Test objective: useful to clarify intent/set up the goal. Prerequisites: useful to set up the test environment. Test data combination & expected results: useful to detect defects. Test steps: useful in manual execution and assist in automation scripting. Questions: What is the value of each of these pieces of information, i.e. how useful are they? What do these various pieces of information help in?
• 103. Syntax of test case documentation Test objective: describe the test objective in natural language. Prerequisites: describe the prerequisites in natural language. Test scenario description: write this as one sentence beginning with "Ensure that system does/does-not...". Test cases: for each scenario, list the test cases as a table as shown below. Test steps/procedure: describe the procedure for execution as a series of steps (1 ..., 2 ...). Note: Be as terse as possible and yet be clear. The intent should be to think more rather than document more. Also, terseness forces clarity to emerge.
• 104. HBT Test Case Architecture Organized by quality levels, sub-ordered by items (features/modules...), segregated by type, ranked by importance/priority, sub-divided into conformance (+) and robustness (-), classified by early (smoke)/late-stage evaluation, tagged by evaluation frequency, linked by optimal execution order, classified by execution mode (manual/automated). A well architected set of test cases is like an effective bait that can 'attract defects' in the system. In HBT, we pay attention to the form and structure of the test cases in addition to the content. The form and structure suggested by the HBT test case architecture also enable existing test cases to be analyzed for effectiveness/adequacy. This can be done by "flowing the existing test cases" into the "mould of the HBT test case architecture".
• 105. Discipline #5 : Tooling Tooling and automation is not simply developing code; it requires clear analysis and design to ensure that the tooling/automation is flexible enough to keep up with changes to the system and that it delivers value. This discipline enables you to analyse the tooling needs in a rational manner, ensuring that investment in tooling is not wasted and that the subsequent scripts do allow us to improve efficiency and effectiveness. This discipline consists of TWO tools, each of which uses certain STEM core concepts to ensure these are done in a scientific and disciplined manner: How to analyse tooling needs (tooling needs analysis, automation complexity analysis) and How to do good scripting (separation of concerns, minimal babysitting principle).
• 106. Tools in D5 - Tooling How to analyse tooling needs (STEM core concepts: tooling needs analysis, automation complexity analysis): this tool enables you to understand which parts of testing need the support of technology in terms of tooling/automation. How to do good scripting (STEM core concepts: separation of concerns, minimal babysitting principle): a script, once developed, has to be kept in sync with the application/system and hence requires continuous maintenance. Also, a script when run may encounter situations that cause it to stop or seek user guidance for continuance. This tool enables you to develop good scripts by ensuring a clear separation of data and code and a designed "execution run flow" (i.e. which script is to be executed in case this one fails), so that the automated run is maximised (i.e. as much of the scripts as possible is indeed run).
• 107. Discipline #6 : Visibility This discipline enables one to "quantify quality" to enable a goal-focused approach to management. The focus of this discipline is to set up a model for measuring quality and to devise measures that are purposeful and goal-focused. This discipline consists of TWO tools, each of which uses certain STEM core concepts to ensure these are done in a scientific and disciplined manner: How to measure quality (quality quantification model) and How to devise measures (Goal-Question-Metric, metrics landscape).
• 108. Tools in D6 - Visibility How to measure quality (STEM core concept: quality quantification model): this tool enables you to set up a model to measure the "intrinsic" quality, using the "cleanliness criteria" to give an objective picture of the system quality. This also allows you to come up with a "cleanliness index" to quantify quality. How to devise measures (STEM core concepts: Goal-Question-Metric, metrics landscape): this technique ensures that you design measures that are goal-focused. Rather than setting measures and then analyzing them, this tool helps you articulate a goal and then derive appropriate measures.
• 109. Discipline #7 : Execution and reporting This discipline ensures that the reporting of information during testing conveys the information that enables purposeful actions to be taken. This discipline consists of TWO tools, each of which uses certain STEM core concepts to ensure these are done in a scientific and disciplined manner: How to do good defect reporting (defect rating principle) and How to learn & improve (contextual awareness).
• 110. Tools in D7 - Execution and reporting How to do good defect reporting (STEM core concept: defect rating principle): this tool helps you report the outcomes of testing, i.e. defects, in a clear manner to (1) enable a clear understanding of the problem, (2) enable clear resolution, and (3) provide learning opportunities for improvement. How to learn & improve (STEM core concept: contextual awareness): a-priori planning/design is useful to pave the way, but learning from the act of testing by understanding the context is essential to effective testing. This tool is about sensitizing you to this, so that the test artefacts are continually enhanced with learnings from testing.
• 111. Discipline #8 : Management This discipline takes an "earned value approach" to management (i.e. goal focused). The focus is on using the cleanliness criteria and index as the basis for ascertaining where we are with respect to quality, in comparison with where we should have been, and then ascertaining risks related to quality & release to enable rational & clear management. This discipline consists of ONE tool, How to do goal focused management (quality quantification model, gating principle), which uses certain STEM core concepts to ensure these are done in a scientific and disciplined manner.
• 112. Tools in D8 - Management How to do goal focused management (STEM core concepts: quality quantification model, gating principle): the tool uses the cleanliness criteria and index to understand where you are with respect to the goal. Remember that we commenced with setting up cleanliness criteria and quality levels. This tool adopts an "earned value approach to quality" by enabling you to assess where you are in the quality levels and compare this with where you should be, helping you clearly understand the gaps and enabling you to manage rationally/objectively.
• 114. Techniques Landscape A guideline that lists the set of test design techniques based on the method of examination, the design stage and the type of defect, allowing you to choose the appropriate ones. The techniques are classified in two ways. The first categorisation is based on the type of information used for design: external information, i.e. black box, and internal information, i.e. white box. The second categorisation is based on the test design outcome: (1) those useful for designing test scenarios, (2) those useful to create the various test data values, and (3) those useful to combine the test data optimally yet effectively.
Black box techniques: Scenario design (functional tests: decision table, flowchart, state machine; NFT (LSPS): operational profiling). Data value generation (boundary value analysis, equivalence partitioning, special value, error based). Test case generation (exhaustive, single fault, at least once, orthogonal array (pair-wise combination)).
White box techniques: Control flow based (cyclomatic complexity, statement coverage, decision coverage, multiple condition coverage, path coverage). Data flow based (data flow (def-use)). Resource based (resource leak).
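Two of the data value generation techniques named above can be sketched for an integer range (the range [1, 100] is an arbitrary example; this is one common formulation of these textbook techniques, not an HBT-specific algorithm):

```python
def boundary_values(lo, hi):
    """Boundary value analysis for an integer range [lo, hi]:
    values just below, at, and just above each boundary."""
    return sorted({lo - 1, lo, lo + 1, hi - 1, hi, hi + 1})

def equivalence_values(lo, hi):
    """Equivalence partitioning: one representative per partition
    (below the range, inside the range, above the range)."""
    return [lo - 1, (lo + hi) // 2, hi + 1]

print(boundary_values(1, 100))     # [0, 1, 2, 99, 100, 101]
print(equivalence_values(1, 100))  # [0, 50, 101]
```

Boundary value analysis concentrates cases where off-by-one defects live, while equivalence partitioning keeps the count optimal by taking one representative per behaviourally equivalent class.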
• 115. Landscaping A technique to rapidly understand the system by examining the various elements and the connections between them. This technique, inspired by mind-mapping, enables one to ask meaningful questions in a systematic manner to understand the needs & expectations. It is based on the simple principle: "Good questions matter more than the answers. Even if questions do not yield answers, it is fine, as it is even more important to know what you do not know." The premise is that understanding about SIXTEEN key information elements and their connections enables one to understand the expectations & the system. The act of seeking information results in questions that aid in understanding.
• 116. Typical questions generated by Landscaping (1/2) Marketplace: What marketplace is my system addressing? Why am I building this application? What problem is it attempting to solve? What are the success factors? Customer type: Are there different categories of customers in each marketplace? How do I classify them? How are their needs different/unique? End user (Actor): Who are the various types of end users (actors) in each type of customer? What is the typical/max. number of end users of each type? Note: an end user is not necessarily a physical end user; a better word is 'actor'. Requirement (Use case): What does each end user want? What are the business use cases for each type of end user? How important is this to an end user, i.e. what is the ranking of a requirement/feature? Attributes: What attributes are key for a feature/requirement to be successful (for an end user of each type of customer)? How can I quantify the attribute, i.e. make it testable? Feature: What are the (technical) features that make up a requirement (use case)? What is the ranking of these? What attributes are key for a successful feature implementation? How may a feature/requirement affect other feature(s)/requirement(s)?
• 117. Typical questions generated by Landscaping (2/2) Deployment environment: What does the deployment environment/architecture look like? What are the various HW/SW that make up the environment? Is my application co-located with other applications? What other software does my application connect/inter-operate with? What information do I have to migrate from existing system(s)? Volume, types etc. Technology: What technologies may be/are used in my application? Languages, components, services... Architecture: What does the application structure look like? What is the application architecture? Usage profile: Who uses what? How many times does an end user use it per unit time, i.e. #/time? At what rate do they use a feature/requirement? Are there different modes of usage (end of day, end of month), and what is the profile of usage in each of these modes? What is the volume of data that the application should support? Behavior conditions: What are the conditions that govern the behavior of each requirement/feature? How is each condition met, i.e. what data (and values) drive each condition?
• 118. Viewpoints See the system from various end users' points of view to identify the needs & expectations and set a clear baseline. Good testing requires that the tester evaluate the system from the end user's angle, i.e. put oneself in the end user's shoes. This is easier said than done. Viewpoints is a technique that enables this. It states that each type of user: 1. has different expectations of the system, 2. uses different features due to differing needs, 3. values different attributes, 4. views the importance of a feature differently, 5. uses features at a different frequency/rate, 6. has different expectations on quality. Once the various types of users are identified, this technique is useful in digging deeper to get a clear handle on needs & expectations.
• 119. Reductionist principle Manage complexity by decomposing the information into smaller elements. Reductionism means reduction, simplification. The objective of this principle is to break down an aspect into smaller parts until it is understood clearly. The intent is to gain crystal-clear clarity to enable a job to be performed well (e.g. Customer → End User → System → Requirement → Feature → Business logic + Data, with attributes and their measures). This principle can be applied at various phases of evaluation to understand various aspects. Product understanding: break down the system needs into use cases, then features; break down the requirements into functional and non-functional aspects. Test design: break down the entity under test into business logic and data components to design functional test scenarios/cases. Complexity assessment: break down complexity into functional, structural, data and attribute complexity. Effort estimation: break down large activities into smaller fine-grained activities so that effort can be estimated precisely.
• 120. Reductionist principle (continued) The principle is to break anything down into its smallest components. The intention is to gain a better understanding. So, if you are trying to understand a product, decompose the product into requirements (aka use cases). Subsequently decompose each use case into its various constituent features. Decompose a given requirement/feature into functional and non-functional aspects. Decompose the functional aspect of a feature into business logic and data. In the case of estimation, decompose the act of validation into large-grained test life-cycle activities and then break each of these into smaller-grained activities. To understand complexity, decompose it into functional behavior complexity, structural complexity (how complicated are the innards), attribute complexity (what aspects of non-functional behavior are challenging) and data complexity (size/volume and data inter-relationships).
• 121. Interaction matrix Understand the interrelations of the elements: requirements, features. A system is not a mere collection of distinct features; it is the interplay of the various features that produces value. But this also has an important side effect: the various features may affect each other in a negative fashion. A highly interacting set of features makes the system complex. This technique allows us to understand the potential interactions among the features/requirements (e.g. a matrix over F1..F4 with an X wherever two features interact). Modifying a feature may therefore result in an unwarranted side effect. The technique helps to understand the interaction of the various features of the software, and therefore to hypothesize the potential unwanted side effects and formulate an effective strategy of evaluation. It is useful to capture the inter-relationships quickly initially, rather than elaborate the semantics of the interaction; the semantics of an interaction may be deferred to the point when detailed analysis of a change needs to be done. Understanding the linkages is also useful to appreciate potential side effects that may affect some of the key attributes. This is useful in understanding the system complexity, enabling effective strategy formulation and, later, optimization of regression tests.
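A small sketch of how an interaction matrix supports regression optimization (the features and interaction pairs are hypothetical, loosely following the F1..F4 example on the slide):

```python
# Hypothetical feature interaction matrix, stored as unordered pairs
# (an X in the slide's matrix becomes one pair here).
interacts = {
    ("F1", "F2"), ("F1", "F4"),
    ("F2", "F3"),
    ("F3", "F4"),
}

def impacted(changed):
    """Features to regression-test when `changed` is modified:
    every feature that shares an interaction with it."""
    out = set()
    for a, b in interacts:
        if a == changed:
            out.add(b)
        elif b == changed:
            out.add(a)
    return sorted(out)

print(impacted("F1"))  # ['F2', 'F4']
print(impacted("F3"))  # ['F2', 'F4']
```

This captures only the linkages, not their semantics, matching the slide's advice to record inter-relationships quickly first and defer detailed analysis of each interaction until a change actually needs it.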
• 122. Attribute analysis A technique to identify attributes expected of the system and ensure that they are testable. It is not sufficient that each feature is functionally clean; it is equally important that the associated attributes be met. The challenging aspect of attributes is that they are typically fuzzy. Good testing implies that attributes be testable, which implies that each attribute have a clear measure or metric. For example, if performance is one such attribute, it is necessary to understand the performance metric for a feature: at worst case t <= T, T being the expected performance metric. Rather than commencing with identifying attributes for the whole system, identify attributes for each requirement and then combine these to arrive at system-wide attributes. For each requirement, list the "critical-to-great-experience" attributes. If it is easier to do this at the level of features, then do so, i.e. identify key attributes for each feature and then arrive at the attributes at the requirement level. Use a standard attribute list like ISO 9126 to ensure that no attributes are missed out. What we have now is a list of attributes for each requirement. Once the attributes for each requirement or feature have been identified, group the common attributes to formulate the system-wide attributes (e.g. A1: F1(a1), F3(a2); A2: F1(b1), F2(b2), F3(a3); A3: F1(c); A4: F2(d)). This brings better clarity as to what each attribute really means, ensuring that the attributes, or non-functional requirements, are indeed testable.
• 123. Attribute analysis (continued) It is quite possible that the attributes are descriptive and therefore hazy/fuzzy. It is important to ensure that every attribute is testable. The chain is: Attribute, then Characteristic(s), then Measure(s), then Expected value(s). 1. Identify the attributes based on the users. 2. Identify the key characteristic(s) of each attribute based on usage patterns. 3. Based on (2), derive technical measures so that we may come up with a number/metric to ensure clarity. 4. Now connect (3) to (2), identifying the value expected for each measure and ensuring that these reflect expectations that are testable. The benefits of applying this technique are: 1. we do focus on the non-functional aspects of the system; 2. the non-functional requirements are indeed testable; 3. we are able to come up with good questions to extract/clarify non-functional requirements when they are not stated or ill-stated.
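The Attribute → Characteristic → Measure → Expected value chain can be sketched as data plus a pass/fail check. The attribute, its measure and the 200 ms threshold below are all invented for this sketch:

```python
# Hypothetical testable attribute: each link of the chain is explicit.
attribute = {
    "name": "Performance",
    "characteristic": "Response time of search",
    "measure": "95th percentile latency (ms)",
    "expected": 200,  # t <= T with T = 200 ms, an assumed target
}

def is_met(measured_ms):
    """Once an attribute has a measure and an expected value,
    a test run either meets it or does not; no fuzziness remains."""
    return measured_ms <= attribute["expected"]

print(is_met(150))  # True
print(is_met(250))  # False
```

The point of the sketch is that a vague attribute ("must be fast") becomes testable only when every field of this record can be filled in.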
• 124. Value prioritisation A technique to prioritise the elements to be validated, to enable efficient and effective testing. A typical system consists of multiple use cases (requirements) that are used by different types of users at differing frequencies. The business importance of each use case is different, and the same is true of the different user types. Since testing is about reducing the business risk to acceptable levels, and accomplishing this with optimal effort/cost, we need to understand the business importance and criticality of users, use cases and the associated features. This technique enables a logical analysis of prioritisation of value so that test effort is targeted at the right aspects. Application: Identify the various types of users. For each user type, identify the typical number of users. If the number of users for a user type is large, we may conclude that this user type is indeed important. However, just because the number of users for a given user type is low, we cannot necessarily conclude that this user type is not as important; it is important to understand how important this user is to successful deployment of the system, i.e. the impact if this user type's expectations are not met. Now combine the number of users of a user type and the business impact of this user type on successful deployment, and arrive at the priority of the user type. Do this for all the user types. In addition to user-type prioritisation, it is necessary to understand the importance of what a user type does, i.e. which requirements (use cases/business flows) are most/more important. Here again we can apply the same logic that we applied for each user type: understand the frequency of usage and the business impact of an incorrectly implemented requirement. Hence it is important to understand which types of users use the requirement and how many times they use it in a given span of time.
Applying the same logic, low frequency of usage does not necessarily indicate that a requirement is less important, as that requirement may cause severe business loss if it did not work correctly, despite being used infrequently. To arrive at the prioritisation of a requirement, one can break down the requirement into its constituent technical features and perform a similar analysis, if it is easier to analyse this from the lower-level technical features. The end point of applying this STEM core concept is a rational way to arrive at the prioritisation of features, requirements and user types. Benefits: This allows us to develop a test strategy that can indeed focus more on the key aspects, utilising the effort, time and cost effectively and efficiently. Understanding prioritisation allows us to set the priority of test scenarios/cases to ‣enable optimal regression ‣enable choosing the key test cases to execute in case of constrained time ‣enable correct severity rating of defects, e.g. failure of important test cases could result in high severity defects.
• 125. Value prioritisation (continued) Understand the business value of the features and their priorities. Effective testing is about reducing business risk to acceptable levels. This technique helps you rank the various end users and use cases/features.
User type, #Users, Business criticality: UT1, n1, VV High. UT2, n2, High. UT3, n3, V High.
Req./Feature, Usage freq., Impact: R1(F1-F3), n1, VV High. R2(F2-F4), n2, High. R3(F4-F6), n3, V High.
Rating scales: Need: Must-have, Could-have, Nice-to-have. Frequency: Heavy, Moderate, Light. Loss outcome: Huge, Moderate, Acceptable.
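One way to combine the two rating scales above into a ranking is a simple product of scores. The numeric weights and the example requirements are assumptions for this sketch; the deck itself prescribes the scales, not a formula:

```python
# Assumed numeric weights for the slide's rating scales.
FREQ = {"Heavy": 3, "Moderate": 2, "Light": 1}
LOSS = {"Huge": 3, "Moderate": 2, "Acceptable": 1}

def priority(usage_freq, loss_outcome):
    """Combine frequency of usage with business loss on failure.
    A rarely used requirement with huge loss still ranks high."""
    return FREQ[usage_freq] * LOSS[loss_outcome]

# Hypothetical requirements with (frequency, loss outcome) ratings.
reqs = {
    "R1": ("Heavy", "Huge"),
    "R2": ("Moderate", "Acceptable"),
    "R3": ("Light", "Huge"),
}
ranked = sorted(reqs, key=lambda r: priority(*reqs[r]), reverse=True)
print(ranked)  # ['R1', 'R3', 'R2']
```

Note how R3 outranks R2 despite its light usage: the huge loss outcome dominates, matching the slide's point that low frequency does not imply low importance.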
• 126. Operational profiling A technique to identify the usage patterns and hence the load profile. Understanding the rate and number of transactions probable on a real system is critical to ensure that the system is designed well and later sized and deployed well. Good understanding of the business domain is seen as a key enabler in arriving at the usage profile. Operational profiling is a technique that enables one to scientifically arrive at a real-life profile of usage. Good understanding of this concept alleviates the problem of lacking deep domain knowledge to understand the usage profile. This core concept consists of these key aspects: 1. Mode: represents a time period of usage, e.g. end of month, where the usage patterns are distinctive and different. 2. Key operations (features/requirements) used. 3. Types of end users associated with the key features/requirements. 4. Number of end users of each user type. 5. Rate of arrival of transactions. In short, for a given mode, identify the end user types and their key operations, then identify the number of users of each type, and then identify the rate of arrival of transactions. Employing this core concept allows us to think better and ask specific questions to understand the marketplace and the usage profile in typical and worst-case scenarios. The operational profile is extremely useful for creating test scenarios for load, stress, performance, scalability and reliability tests. So, the profiling consists of identifying the various actors, the use cases these actors use, the frequency (rate) at which they use them, and the number of operations they would do in different time periods.
• 127. Operational profiling (continued) Example profile (number of operations per time period t1..t4): O1: 50, 20, 30, 20. O2: 25, 0, 15, 10. O3: 100, 50, 15, 0. O4: 0, 35, 35, 50. Steps: 1. Identify the key operations of the system. 2. Connect the user types & operations, i.e. which operations are used by which user types. 3. For each user type, list the typical & maximum number of users. 4. Identify modes of usage, e.g. different times of day/week/month/year. 5. For each mode, approximate the number of operations in a given time period for each user type. 6. Finally, approximate the rate of arrival of the operations. NOTE: a user need not be a physical user; it could be another system.
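Using the example counts from the slide, the profile can be aggregated to find the heaviest mode, which is typically what load and stress scenarios are built around (the aggregation step itself is an illustration, not prescribed by the slide):

```python
# Operations per time period t1..t4, taken from the slide's example.
profile = {
    "O1": [50, 20, 30, 20],
    "O2": [25, 0, 15, 10],
    "O3": [100, 50, 15, 0],
    "O4": [0, 35, 35, 50],
}

# Total load in each time period, to locate the peak mode.
periods = [sum(counts[i] for counts in profile.values()) for i in range(4)]
peak = periods.index(max(periods)) + 1  # 1-based: t1..t4

print(periods)  # [175, 105, 95, 80]
print(peak)     # t1 is the heaviest period
```

Dividing each total by the period's duration would give the arrival rate (step 6 on the slide).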
• 128. Goal-Question-Metric (GQM) — A technique to ensure that the goal (cleanliness criteria) is indeed testable; a technique that helps you set clear goals. Metrics may be viewed as milestone markers towards the goal. Collecting metrics is easy; the hard part is answering "how is it useful in helping me reach my goal?"
1. Identify the goal(s) first
2. Come up with questions to understand the distance from the goal
3. To answer these questions objectively, identify objective measures
Vague cleanliness criteria are useless. This technique enables you to derive cleanliness criteria that are clear by forcing you to identify:
1. What is cleanliness? (Goal)
2. How do you ascertain the cleanliness? (Question)
3. How do you make this less subjective, i.e. via an objective measure? (Metric)
(The slide's diagram shows a Goal fanning out to questions Q1–Q2 and metrics M1–M4.)
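A GQM tree is simple enough to hold as data, and doing so makes the "is the goal testable?" check mechanical: any question with no metric attached means the goal cannot yet be ascertained objectively. This is an illustrative sketch; the goal, questions and metric names are hypothetical.

```python
# A small illustrative GQM tree: one goal, the questions beneath it, and the
# objective metrics that answer each question. All names are hypothetical.

gqm = {
    "goal": "Release is functionally clean at QL2",
    "questions": {
        "How many severe defects remain open?": ["open_sev1_count", "open_sev2_count"],
        "How much of the intended scope was tested?": ["tc_executed_pct"],
    },
}

def untestable_questions(tree):
    """Questions with no objective metric attached: the goal is not yet testable."""
    return [q for q, metrics in tree["questions"].items() if not metrics]

print(untestable_questions(gqm))  # []
```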
• 129. Negative thinking — A technique to identify potential defect types based on the "Aspects" of a system. The objective is to identify potential defects in the entity under test in a scientific manner by adopting a fault-centric approach. The intent is to think 'negatively' on various aspects and thereby identify potential defects in the entity under test. Any entity under test processes data according to certain business logic, is built using structural components that use resources from the environment, and is ultimately used by certain classes of end users. To hypothesise potential defects in an entity under test, this generalisation can be applied in a scientific manner.

Aspect: Data — Generalized PDTs:
- Violation of type specification
- Incorrect format of data (data layout, fixed vs. variable length)
- Large volume of data
- High rate of data arrival
- Duplication of data that is meant to be unique

Aspect: Business logic — Generalized PDTs:
- Missing conditions & values that govern the business logic
- Conflicting conditions
- Incorrect handling of erroneous paths
- Impact on attributes, e.g. performance, scalability, reliability, security etc.
- Transaction related issues, i.e. multiple operations need to complete, else none should be performed

Aspect: Structure — Generalized PDTs:
- Consuming dynamic resources and not releasing them
- Errors/exceptions not handled well or ignored
- Synchronization issues, deadlock issues, race conditions
- Blocking leading to "hanging" when dependent code does not return

Aspect: Environment — Generalized PDTs:
- Improper configuration of settings in the environment
- Non-availability of resources
- Incorrect versions of dependent sub-systems/components
- Slow connections

Aspect: Usage — Generalized PDTs:
- Wrong sequencing of usage
- Improper disconnects/aborts
- High rate of usage
- Large usage volume
- Unauthorized usage, i.e. violation of access control
- Difficult to use, i.e. not very intuitive
• 130. Negative thinking (continued) — This technique decomposes an entity into FIVE elemental aspects:
‣ Data
‣ Business logic
‣ Structure
‣ Environment
‣ Usage
The intent is to think 'negatively' on these FIVE aspects and thereby identify potential defects in the entity under test. Any entity under test processes data (Data) according to certain business logic (Business logic), is built using structural components (Structure) that use resources from, and live in, the environment (Environment), and is ultimately used by certain classes of end users (Usage).
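The fault-centric hypothesis step can be sketched as crossing an entity with a catalogue of generalized PDTs per aspect. The catalogue below is abridged from the slides; the entity name is hypothetical.

```python
# A sketch of negative thinking as a lookup: cross the entity under test with
# an (abridged) catalogue of generalized PDTs per aspect to produce its
# hypothesised potential-defect list. The entity name is hypothetical.

GENERALIZED_PDTS = {
    "Data": ["Violation of type specification", "High rate of data arrival"],
    "Business logic": ["Missing conditions & values", "Conflicting conditions"],
    "Structure": ["Resource consumed but not released", "Race condition"],
    "Environment": ["Improper configuration", "Non-availability of resources"],
    "Usage": ["Wrong sequencing of usage", "Improper disconnects/aborts"],
}

def hypothesise(entity, catalogue=GENERALIZED_PDTS):
    """Apply every aspect's generalized PDTs to the entity."""
    return [(entity, aspect, pdt)
            for aspect, pdts in catalogue.items()
            for pdt in pdts]

pdts = hypothesise("LoginService")
print(len(pdts))  # 10
```

In practice each generalized PDT is then specialised to the entity (e.g. "high rate of data arrival" becomes "burst of login requests"), which is a human step, not a mechanical one.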
• 131. Generalised PDTs for "Data" Aspect
• 132. Generalised PDTs for "Business Logic" Aspect
• 133. Generalised PDTs for "Structure" Aspect
• 134. Generalised PDTs for "Environment" Aspect
• 135. Generalised PDTs for "Usage" Aspect
• 136. Defect centricity principle — A principle to group similar potential defects into potential defect types (PDTs). The intent is to create a manageable list of PDTs. (The slide's diagram maps the system, via levels, to PDTs.)
• 137. EFF Model (Error-Fault-Failure) — A technique to identify potential defect types by seeing the system from different "Views". Errors injected into the system irritate faults, causing them to propagate and result in failures. Failure is what the customer observes; high-impact failures are the result of severe faults. EFF enables failure-centric and error-injection-centric thinking to identify potential defects, complementing fault-centric thinking. Each "Aspect" can be viewed from THREE angles:
1. Error injection — What errors can we inject? (ERROR irritates FAULT)
2. Fault proneness — What inherent faults can we "irritate"? (FAULT propagates, resulting in FAILURE)
3. Failure — What failures may be caused?
• 138. Orthogonality Principle — A principle that clearly delineates quality levels, test types and test techniques. This principle states that to uncover a defect optimally, you need to identify the earliest stage of detection (i.e. quality level), identify the specific type of test, and use the most appropriate test techniques (i.e. bait) to ensure that the scenarios & cases are adequate. This allows us to understand the
‣ earliest point of detection
‣ type of test needed &
‣ effective test technique
i.e. given a potential defect:
1. What is the earliest point of detection?
2. What type of test needs to be done?
3. What test techniques would be most suitable?
Identifying the levels, the corresponding test types and techniques is what constitutes a strategy. (The slide's diagram places each defect in a three-axis space of stage/level, test type and technique.)
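Answering the three questions for each hypothesised defect yields a table, and that table is the strategy. A minimal sketch, with hypothetical defect types, levels, test types and techniques:

```python
# The orthogonality principle as a lookup: each hypothesised defect type is
# mapped to its earliest quality level, test type and technique. The set of
# rows constitutes the test strategy. All entries are hypothetical examples.

strategy = {
    "Invalid input accepted":  {"level": "QL1", "test_type": "Input validation", "technique": "Boundary value analysis"},
    "Wrong discount computed": {"level": "QL2", "test_type": "Functional",       "technique": "Decision table"},
    "Slow under load":         {"level": "QL4", "test_type": "Load",             "technique": "Operational profiling"},
}

def defects_targeted_at(strategy, level):
    """Which defect types does this strategy plan to catch at a given level?"""
    return sorted(pdt for pdt, row in strategy.items() if row["level"] == level)

print(defects_targeted_at(strategy, "QL1"))  # ['Invalid input accepted']
```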
• 139. Quality growth principle — A principle to set up progressively improving levels of quality/cleanliness for an entity under test. Staging quality growth via levels enables clarity of defect detection: "what to detect when". Reaching the "pinnacle of excellence" is like climbing the staircase of quality. This also allows us to objectively measure quality. (The slide's staircase chart plots cleanliness against stages, with PDT1–PDT10 distributed across levels QL1–QL4.)
• 140. Process landscape — A guideline that lists the various "process" models for test design and execution.
Process models:
1. Disciplined/Structured — design first
2. Ad-hoc/Random/Creative — on-the-fly design
3. Contextual — context-based design
4. Historical — past-issues-based design
5. Experiential — domain-based design

  Defect type   Process model
  DT1           1, 4
  DT2           2, 3
  DT3           4

The process model employed must be based on the type of defect to be uncovered. Certain types of defects are best discovered using a disciplined approach, while some may rely on the individual's creativity at the time of testing. Some of these may rely on pure domain experience, some may be better uncovered by careful analysis of the past history of issues, and some need a good understanding of the context of deployment and usage.
• 141. Tooling needs analysis — A technique to analyse the needs of tooling & automation. Tooling for automating testing costs money; it is therefore necessary to be sure of the purpose or objective to be achieved. This technique enables analysing the tooling needs, i.e. what is to be automated and the reasons/benefits.
Tooling needs can be of these types:
1. Structure analysis
2. Installation
3. Setup/configuration
4. Data creation
5. Test execution
6. Outcome assessment
7. Behaviour probing
Guiding aspects to automation:
1. Frequent basic tests
2. Regression oriented
3. Time consuming
4. Effort consuming
5. Requires high skills
To analyse tooling needs in a disciplined manner:
1. First analyse what aspect of the test life-cycle needs tooling help.
2. Then analyse which test scenarios cannot be executed manually at all.
3. Then identify which of the scenarios that can be executed manually would be "nice to automate", based on the guiding aspects above.
• 142. Cycle scoping — A technique to set up goal-focused test cycles with a clear scope for each cycle. A test cycle is the point in time wherein the build is validated; it takes multiple test cycles to validate a product. Each test cycle should have a clear scope: "what needs to be tested and what aspect of cleanliness needs to be evaluated". The scope of a cycle in HBT is a Cartesian product of the Features (or Entities) {F1, F2, …, Fn} and the Types of tests {T1, T2, …, Tn} to be executed:
Scope = {Features} x {Test types}
i.e. what features will be tested and what tests will be done is the scope of a cycle. For example, cycle #1 may cover features F1, F2 with test type T1, while a later cycle covers F1–F4 with test types T1–T3. In short, the focus of each cycle is to uncover certain PDTs, enabling a monotonic quality growth in line with the intended quality levels (QL1, QL2, QL3, …).
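Since the slide defines scope as a Cartesian product, it can be computed directly. A minimal sketch with placeholder feature and test-type names:

```python
from itertools import product

# Cycle scope as the Cartesian product of features and test types, exactly as
# the slide defines it. Feature/test-type names are illustrative placeholders.

def cycle_scope(features, test_types):
    """Scope = {Features} x {Test types}: every (feature, test type) pair."""
    return list(product(features, test_types))

scope = cycle_scope(["F1", "F2"], ["T1", "T2"])
print(scope)  # [('F1', 'T1'), ('F1', 'T2'), ('F2', 'T1'), ('F2', 'T2')]
```

Enumerating the pairs explicitly makes it easy to see what a cycle deliberately leaves out, and to diff the scopes of successive cycles.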
• 143. Defect centred activity breakdown — A technique to estimate test effort by identifying the various activities required to uncover the potential defects in the entity under test. Estimate effort based on the PDTs that have to be uncovered in the various 'elements' of the software (flows, features, screens, components) at the different stages (quality levels QL1–QL4), via the corresponding test types and techniques. Identify the PDTs to be uncovered, stage them, identify the tests, break down each test into the various activities (design, document, automate, execute), estimate effort at the leaf level and then sum them up.
• 144. Defect centred activity breakdown (continued) — The activities to estimate are: understand, design & documentation (with review; this depends on the mode of "doing", e.g. common test cases via checklist, static/dynamic), automate, execute, log defects and manage. The drivers of effort are #Elements, #TS (test scenarios), #Cycles, #Defects and #Hrs/wk. For a given level, estimate effort based on #BasicElements, #TS, #Cycles and #Defects.
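The leaf-level roll-up described above can be sketched as a nested breakdown summed bottom-up. The scenario names, activity set and hour values below are illustrative placeholders, not cookbook figures:

```python
# A leaf-level effort roll-up: each test scenario carries per-activity hours
# (design, document, automate, execute); totals are summed bottom-up.
# All names and numbers are illustrative placeholders.

def total_effort(breakdown):
    """Sum effort hours across all scenarios and all their activities."""
    return sum(hours
               for activities in breakdown.values()
               for hours in activities.values())

breakdown = {
    "TS1": {"design": 2.0, "document": 1.0, "automate": 4.0, "execute": 0.5},
    "TS2": {"design": 1.5, "document": 0.5, "execute": 1.0},  # not automated
}

print(total_effort(breakdown))  # 10.5
```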
• 145. Approximation principle — A principle to aid in scientific approximation. The measure whose value is to be approximated is based on a set of parameters, each having a varying sensitivity to the outcome, with a formula that binds them. The value of each parameter needs to be hypothesised and, if sensitive, tested, and then the formula applied. Iterate based on learning and the potential estimated variation.
1. Identify the key parameters
2. Work out the formula
3. Understand which of these parameters are 'sensitive', i.e. a small variation can affect the outcome grossly
4. Check if the parameters can be broken down further, until their values can be estimated correctly
5. Now estimate the value of the parameters
5.1. Guess/hypothesise based on best judgment
5.2. Test the hypothesis and correct it to a value closer to reality
6. Apply the formula and compute the value
7. Iterate based on the learning gleaned from this approximation cycle and the estimated potential variation
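Steps 2 and 3 above can be sketched numerically: bind the parameters with a formula, then perturb each parameter to see how much the outcome moves. The formula and values here are hypothetical, purely for illustration:

```python
# A sketch of the approximation loop: a formula over parameters, plus a crude
# sensitivity check (perturb one parameter, measure the relative change in the
# outcome). The formula and all values are hypothetical.

def estimate(params):
    # hypothetical effort formula: scenarios x hours-per-scenario x cycles
    return params["scenarios"] * params["hrs_per_scenario"] * params["cycles"]

def sensitivity(params, key, delta=0.1):
    """Relative change in outcome for a +10% change in one parameter."""
    perturbed = dict(params, **{key: params[key] * (1 + delta)})
    base = estimate(params)
    return (estimate(perturbed) - base) / base

params = {"scenarios": 40, "hrs_per_scenario": 1.5, "cycles": 3}
print(estimate(params))  # 180.0
```

Parameters with high sensitivity are the ones worth breaking down further and testing (step 5.2) before trusting the computed value.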
• 146. Box model — A technique to rapidly understand the intended functional behaviour of an entity under test by identifying the conditions, and then the data and business logic (condition sequencing). Given an entity to be tested, understand the intended behaviour rapidly to generate the behaviour model:
1. Identify the conditions that govern the behaviour first.
2. Then identify the data elements that drive the conditions.
3. Finally, identify the sequencing of conditions as a flow, to understand the business logic (or behaviour).
The focus is to extract the conditions and identify the data elements to enable construction of a behaviour model, and also to discover unstated/missing behaviour. (The slide's diagram shows the entity as a box transforming inputs I1, I2 into outputs O1, O2 via the described business logic.)
• 147. Behaviour-Stimuli (BEST) approach — A technique to design test scenarios and cases, ensuring sufficient yet optimal and purposeful test cases. Testing is about injecting a variety of stimuli and assessing the behaviour by comparing the actual with the expected result. First identify the behaviours to be validated, then generate the stimuli. A behaviour is denoted by a test scenario, while test cases represent stimuli. This is a hierarchical approach to test design; it enables clarity, coverage and optimality. (The slide's diagram shows inputs I1, I2 and outputs O1–O3 of the entity under test, with test scenario TS #1 owning test cases TC #1–TC #3.)
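The scenario/case hierarchy reads naturally as nested data: each behaviour (scenario) owns the stimuli (cases) that probe it. The scenarios and cases below are hypothetical examples:

```python
# The BEST hierarchy as plain data: each test scenario (a behaviour) owns the
# test cases (stimuli) that probe it. Scenario/case names are hypothetical.

scenarios = {
    "TS1: valid login succeeds": [
        "TC1: correct user/password",
        "TC2: password at maximum length",
    ],
    "TS2: invalid login is rejected": [
        "TC3: wrong password",
        "TC4: locked account",
    ],
}

def case_count(scenarios):
    """Total stimuli across all behaviours."""
    return sum(len(cases) for cases in scenarios.values())

print(case_count(scenarios))  # 4
```

Keeping the hierarchy explicit makes coverage reviewable at the behaviour level first, and only then at the stimulus level, which is where the clarity and optimality come from.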
• 148. Input granularity principle — A principle to identify the data element(s) for an entity under test and their specification. The notion of what an input is, and therefore its specification, is based on the level of testing. The input specification at a lower level is 'fine', whereas at higher levels it is 'coarse'. Fine implies basic data types, whereas coarse implies complex/aggregate data types. Understanding this is key to generating test cases appropriate to the level of testing.
• 149. Complexity assessment — A technique to understand an entity's complexity in order to identify suitable test techniques. Systems that are complex demand to be tested more carefully. Some systems are complex business-logic-wise, i.e. too many conditions and combinations, while some systems are structurally complex. In certain systems the attributes may be demanding, and therefore the complexity may lie in the attributes. Complexity can be broken into:
1. Functional (behavioural) complexity — business logic complexity, data complexity
2. Structural complexity — logic complexity, resource complexity
3. Attribute complexity
If (1) is complex, black-box techniques are useful; if (2) is complex, white-box techniques are useful; if (3) is complex, a judicious mix of (1) and (2) is necessary.
• 150. Coverage evaluation — A technique to assess test case adequacy. Adequacy of test cases is key to clean software. This technique helps in understanding the breadth, depth and porosity of the test cases. Breadth relates to the various types of tests, to uncover the different types of defects. Depth relates to the various levels (quality levels) of tests, to ensure that defects at all levels can be uncovered. Porosity is whether a test case is a clear combination of data or not, i.e. test case "fine-ness". Additionally, it is necessary to understand the conformance and defect orientation of the test cases.
• 151. Automation complexity analysis — A technique to analyse the complexity of tooling/automation. The complexity of a script, and therefore the effort required to design and code it, depends on various parameters. A script consists of sections of code to set up the conditions for the test, drive the test, compare the outcome, log information and finally clean up. The complexity of the script may therefore be decomposed into individual section complexities and analysed:
- Setup — complexity depends on #steps, data, inter-relationships
- Driver — complexity depends on length of flow (#steps), error-recovery complexity
- Oracle — complexity depends on #comparisons, type of comparison (coarse versus fine) and whether it is deterministic or non-deterministic
- Log — complexity depends on #log points and log information details
- Cleanup — complexity depends on #steps, data inter-relationships
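The five sections the slide names can be kept as separate functions, so each section's complexity can be sized, reviewed and changed independently. This is a skeleton with entirely hypothetical details, not a real test harness:

```python
# A skeleton of the five script sections (setup, driver, oracle, log, cleanup)
# kept as separate functions so each section's complexity can be analysed
# independently. All details are hypothetical placeholders.

def setup():
    return {"user": "test01"}            # create the test preconditions

def driver(ctx):
    return {"status": "ok"}              # drive the test steps

def oracle(outcome):
    return outcome["status"] == "ok"     # compare actual vs expected

def log(result):
    print("PASS" if result else "FAIL")  # record the verdict

def cleanup(ctx):
    ctx.clear()                          # undo whatever setup created

def run():
    ctx = setup()
    try:
        result = oracle(driver(ctx))
        log(result)
        return result
    finally:
        cleanup(ctx)                     # runs even if the test blows up

run()
```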
• 152. Minimal babysitting principle — A principle to ensure unattended automated test runs. When automated tests are run, some of the scripts may fail and abort the entire test cycle. To utilise automation most effectively and increase test efficiency, it is necessary to maximise the test run, i.e. as many scripts as can be run must be executed. This principle states that the test scripts must be designed in such a manner that 'babysitting', i.e. restarting the test run manually, is minimal.
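One common way to honour this principle is to isolate each script in the runner, so a crash in one script is recorded and the run continues. A minimal sketch, with hypothetical scripts (one of which deliberately crashes):

```python
# A sketch of minimal babysitting: the runner isolates each script so that a
# failure or crash in one never aborts the remaining scripts. The scripts
# themselves are hypothetical stand-ins.

def run_all(scripts):
    results = {}
    for name, script in scripts:
        try:
            script()
            results[name] = "pass"
        except Exception as exc:   # record the failure and move on
            results[name] = f"fail: {exc}"
    return results

scripts = [
    ("script_1", lambda: None),
    ("script_2", lambda: 1 / 0),   # simulated crash mid-run
    ("script_3", lambda: None),
]

print(run_all(scripts))
```

All three scripts are attempted even though the second one crashes, so the overnight run yields results for everything that could run, instead of stopping at the first failure.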
• 153. Separation of concerns principle — A principle to ensure delineation of code & data in automation, to facilitate robust and maintainable automation. A script consists of code (common and specific) and the data it uses to drive the system under test: setup/configuration information and the actual test data. The basic attribute of a good script is its ability to be flexible with minimal changes for adaptation; hence it is necessary that a script does not contain hard-coded data. This principle states that there must be a clean separation of the code and data aspects of the script.
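The separation can be shown in miniature: generic script logic on one side, configuration and test data on the other (in practice the data would live in an external file). All names, URLs and values here are illustrative placeholders:

```python
# Code/data separation in miniature: the script logic is generic, while the
# configuration and the test data live in separate structures (in practice,
# external files). All names and values are illustrative placeholders.

CONFIG = {"base_url": "https://app.example.test", "timeout_s": 30}

TEST_DATA = [
    {"user": "alice", "expect": "welcome"},
    {"user": "",      "expect": "error"},
]

def login_outcome(user):
    """Stand-in for driving the system under test."""
    return "welcome" if user else "error"

def run(config, data):
    """Generic, data-driven script body: no hard-coded test values."""
    return all(login_outcome(row["user"]) == row["expect"] for row in data)

print(run(CONFIG, TEST_DATA))  # True
```

Adding a new case is then a data edit, not a code change, which is exactly the flexibility the principle asks for.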
• 154. Quality quantification model — A technique to "quantify quality" in alignment with the cleanliness criteria and quality levels. Quantify software quality to allow for better decision making. Software is invisible, and quality is the invisible aspect of this invisible. This technique enables you to set up an objective measurement system for measuring the quality of software:
1. Rate each cleanliness criterion
2. Represent these as a Kiviat chart
3. The area under the chart for a cycle represents the "Quality Index"
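The area of a Kiviat (radar) chart with equally spaced spokes is straightforward to compute from the ratings: sum the triangle areas between adjacent spokes. The criteria and ratings below are hypothetical:

```python
import math

# Quality Index as the area of the Kiviat (radar) chart: each cleanliness
# criterion is rated on a spoke, and the polygon area is the sum of the
# triangles between adjacent spokes. Ratings are illustrative placeholders.

def quality_index(ratings):
    """Area of the radar polygon for equally spaced spokes."""
    n = len(ratings)
    angle = 2 * math.pi / n
    return 0.5 * math.sin(angle) * sum(
        ratings[i] * ratings[(i + 1) % n] for i in range(n)
    )

ratings = [8, 7, 9, 6]  # e.g. functionality, performance, reliability, usability
print(quality_index(ratings))  # 110.5
```

Because the index grows with every spoke, a cycle-over-cycle increase in area gives a single number that tracks the intended monotonic quality growth.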
• 155. Metrics landscape — A guideline for designing goal-oriented metrics to rationally assess quality, delivery risk and test effectiveness. To know where we are and how we are doing, it is necessary to have a beacon to light up the way and ensure good visibility: "good goal-oriented metrics are that beacon". The landscape spans four quadrants — Quality, Progress, Risk and Process. Examples: Effectiveness — test breadth, depth, defect escapes, +:- ratio, coverage; Efficiency — blockers; Productivity — #TC executed/designed.
• 156. Defect rating principle — A principle to rate defect severity and priority. Defects are rated by Severity and Priority. The severity of a defect is decided by the impact of the defect on the customer ("business risk"): serious impact implies HIGH severity, and the customer decides severity. The priority of a defect is decided by the risk posed to a timely release ("release risk"): a blocker implies HIGH priority, and the development team decides priority.
• 157. Contextual awareness — A principle of learning from context to enable better understanding and increased test effectiveness. Good testing requires keen observation skills and a sharp 'ear to the ground'. Observation of context, and learning from it, is key to better understanding and to the improvement of test cases. Contrary to "familiarity breeds contempt", getting familiar with the internal workings and the external behaviour goes a long way in significantly enhancing test effectiveness. (The slide's diagram shows the test cases before and after a test cycle, refined by what was learnt.)