Understanding Key Concepts and Applications in Week 11: A Comprehensive Overview of Critical Topics, Practical Insights, and Real-World Examples for Effective Learning and Mastery of Essential Skills in the Course Curriculum
2. Today in POP…
• Custom Software Development
• Software's Chronic Crisis
• Software Testing
o Key terms: fault, failure, error
o Testing strategies
3. Custom Software Development
• Requirements: gather as much information as possible about the details and specifications of the desired software from the client
• Design: refine the Analysis Model with the goal of creating a Design Model; design a software structure that realises the specification
• Implementation: code the software
• Testing: test the software to verify that it operates as per the specifications provided by the client
• Maintenance
6. Testing in life cycle models
• There are numerous software development
models
• The development model selected for a project depends on the aims and goals of that project
• Testing is always vitally important
• Testing forms part of the overall development model chosen for the project
14. The first computer “bug”
• Grace Hopper (1906 – 1992)
• Working on the Mark II in 1947
• Log entry reads:
“moth trapped between the points of Relay #70,
in Panel F… First actual case of bug being
found”.
• Credited with coining the term
“debugging”.
15. Software’s Chronic Crisis
• Large software systems often:
o Fail to provide desired functionality
o Fall behind schedule
o Run over budget
o Cannot evolve to meet changing needs
• For every 6 large software projects that become operational, 2 are cancelled
• On average software projects overshoot their schedule by
half
• Three quarters of large systems do not provide required
functionality
16. Software Failures
• There is a very long list of failed software projects
and software failures
• These are extremely expensive and kill people!
17. Famous software horror stories
• Ariane 5 (1996)
o Exploded 40 seconds into flight – cost $500 million.
• Mars Climate Orbiter (1998)
o Software errors caused the probe to miss Mars. Cost $327
million.
• Boeing 737 Max (2018 – 2019)
o Software problems contributed to TWO crashes (Lion Air and Ethiopian Airlines). These cost 346 lives and cost Boeing at least $60 billion.
• London Ambulance Dispatching (1992)
o Cost cutting in testing
o Lack of stress testing
18. Software Testing
• Goal of testing
o Find faults in the software
o Demonstrate that there are no faults (for the test cases used
during testing)
• It is never possible to prove that there are no faults
• Testing should help locate errors, not just detect their
presence
o A yes/no answer to the question “does the program work?” is not very helpful
• Testing should be repeatable
o Can be difficult for concurrent or distributed software
o Need to consider the effect of the environment and of uninitialised state
19. Faults, Errors and Failures
• Software Fault:
o A static defect in the software
o Equivalent to design mistakes in hardware
• Software Error:
o An incorrect internal state that is the manifestation of some
fault
o A fault may lead to an error (i.e. the error causes the fault to
become apparent)
• Software Failure:
o Unexpected or incorrect behaviour with respect to the
requirements or other specifications
20. A medical analogy
• A patient gives a doctor a list of symptoms
o Failures
21. An Example of a fault
def numOfZero(array):
    # initialise zero counter
    zeroCount = 0
    n = 1
    while n < len(array):
        if array[n] == 0:
            zeroCount += 1
        n += 1
    return zeroCount
Fault: should start searching at index 0, not 1
22. Test 1 (pass)
(Same code and fault as slide 21: searching should start at index 0, not 1)
Test 1: [2, 7, 0]
Expected: 1
Actual: 1
Error: n is 1, not 0, on the first iteration
Failure: none
23. Test 2 (fail)
(Same code and fault as slide 21)
Test 2: [0, 2, 7]
Expected: 1
Actual: 0
Error: n is 1, not 0, so the zero at index 0 is never examined
Error propagates to the variable zeroCount
Failure: zeroCount is 0 at the return statement
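Fixing the fault is a one-line change: start the search at index 0. A minimal sketch of the corrected function, re-run against both test cases from the slides:

```python
def numOfZero(array):
    # initialise zero counter
    zeroCount = 0
    n = 0  # fix: start searching at index 0, not 1
    while n < len(array):
        if array[n] == 0:
            zeroCount += 1
        n += 1
    return zeroCount

# With the fault removed, no error state arises and both tests pass
assert numOfZero([2, 7, 0]) == 1  # Test 1
assert numOfZero([0, 2, 7]) == 1  # Test 2
```

Note that both tests are needed: Test 1 alone passed even on the faulty version, illustrating that a passing test set cannot prove the absence of faults.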
24. Testing: partial verification
• Verification locates problems; it doesn’t explain them!
• Testing checks only the values we select
• Even small systems have millions (of millions) of possible tests
• The number of test cases increases exponentially with the number
of input/output variables
• Testing software is hard and can never be complete
25. Drilling into your code
• Once you have found a problem you need to understand it
• You need to consider the state of the programme as it runs
• Contents of variables, inputs and outputs
• You can do this on paper (desk tracing)
• You can add print statements to write contents of variables
to the console
• You can use a debugger
numOfZero.py
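One way to inspect the running state, as suggested above, is to add temporary print statements to the faulty numOfZero. This is a sketch; the exact trace format is a choice, not from the slides:

```python
def numOfZero(array):
    zeroCount = 0
    n = 1  # the fault from the earlier slides
    while n < len(array):
        # temporary debug output: show the program state on each iteration
        print(f"n={n}, array[n]={array[n]}, zeroCount={zeroCount}")
        if array[n] == 0:
            zeroCount += 1
        n += 1
    return zeroCount

numOfZero([0, 2, 7])  # the trace never shows n=0, exposing the error
```

A debugger gives the same visibility without editing the code: set a breakpoint on the while line and step through, watching n and zeroCount.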
26. Testing Strategies
• Offline (static)
1. Syntax checking and “lint” testers
2. Walkthroughs (“dry runs”)
3. Inspections
• Online (live)
1. Black box testing
2. White box testing
27. Syntax checking
• Detecting errors before the program is run is
preferable to having them occur in a running program
• Syntax checking will determine whether a program
“looks” acceptable
• “Lint” programs do deeper tests on code – for
example:
o Detecting lines that will never be executed
o Detecting variables that have not ben intialised
• Compilers do a lot of this as “warnings”
28. Inspections
• A team of programmers read the code and
consider what it does
• The inspectors play “devil's advocate”, trying to break it!
• Very time consuming (therefore expensive)
• Often only used for critical code
29. Walkthroughs
• Similar to inspections, but the inspectors
“execute” the code using simple test data
• Effectively formalised desk tracing
• Expensive and time consuming
• Not always possible – especially for large systems
• Inspections / walkthroughs will typically find 30-
70% of errors
30. Black Box Testing
• Generate test cases from the specification
o i.e. don’t look at the code
• Advantages
o Avoids making the same assumptions as the
programmers
o Test data is independent of the implementation
o Results can be interpreted without knowing
implementation details
31. Consider this method
def largestElement(array):
    largest = array[0]
    for n in array:
        if n > largest:
            largest = n
    return largest
largestElement.py
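Working purely from the specification ("return the largest element of the array"), a black-box test set might look like the following sketch; the particular test values are illustrative assumptions, not from the slides:

```python
def largestElement(array):
    largest = array[0]
    for n in array:
        if n > largest:
            largest = n
    return largest

# Black-box tests derived from the specification alone,
# without looking at how the method is implemented
assert largestElement([1, 2, 3]) == 3      # largest at the end
assert largestElement([3, 2, 1]) == 3      # largest at the start
assert largestElement([-5, -2, -9]) == -2  # all negative values
assert largestElement([7]) == 7            # single element
```

An off-nominal input such as an empty list would expose a further problem: array[0] raises an IndexError.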
33. Choosing your test set
• Test sets should be chosen using a knowledge of the data that
is most likely to cause problems
• Equivalence partitioning
• Boundaries
• Off-nominal (extremes)
34. Equivalence Partitioning
• Suppose the system asks for “a number between 100 and
999”
• This gives three equivalence classes of input:
o Less than 100
o 100 to 999
o Greater than 999
• We thus test characteristic values from each equivalence class
• Example: 50 (invalid), 500 (valid), 1500 (invalid)
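The three equivalence classes can be exercised with one characteristic value each. A minimal sketch, assuming an inclusive range and a hypothetical validator named inRange:

```python
def inRange(value):
    # hypothetical validator for "a number between 100 and 999"
    # (bounds assumed inclusive)
    return 100 <= value <= 999

# One characteristic value from each equivalence class
assert inRange(50) is False    # class: less than 100
assert inRange(500) is True    # class: 100 to 999
assert inRange(1500) is False  # class: greater than 999
```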
35. Boundary Analysis
• Arises from the observation that most programs fail at input
boundaries
• Suppose the system asks for “a number between 100 and 999”
• The boundaries are 100 and 999
• We therefore test the values 99, 100, 101 (lower boundary) and 998, 999, 1000 (upper boundary)
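The six boundary values can be checked directly. This sketch reuses the hypothetical inRange validator (bounds assumed inclusive, as above):

```python
def inRange(value):
    # hypothetical validator for "a number between 100 and 999"
    return 100 <= value <= 999

# Three values around each boundary: just outside, on, and just inside
assert [inRange(v) for v in (99, 100, 101)] == [False, True, True]    # lower
assert [inRange(v) for v in (998, 999, 1000)] == [True, True, False]  # upper
```

A common boundary fault this catches is writing 100 < value instead of 100 <= value: the test for 100 would then fail.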
36. Off-nominal testing
• Extreme data
o Largest possible number
o Smallest possible number
o Negative numbers
o Zero
o Large strings
o Empty strings
37. White (clear) box testing
• Use a knowledge of the program structure to
guide the development of tests
• Aim to test every statement at least once
• Test all paths through the code
• A test is path complete if each possible path
through the code is exercised at least once by the
test case
38. Simple white box example
• There are two possible paths through this code
• signal > 5 and signal <= 5
• Both should be executed by the test set
if signal > 5:
    print("Hello")
else:
    print("Goodbye")
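A path-complete test set for this fragment needs one test per path. In this sketch the fragment is wrapped in a function (the name greet is an assumption, not from the slides) so each path's result can be asserted:

```python
def greet(signal):
    # the two-path fragment from the slide, returning instead of printing
    # so that tests can check the result
    if signal > 5:
        return "Hello"
    else:
        return "Goodbye"

# Path-complete test set: each path is exercised at least once
assert greet(6) == "Hello"    # path taken when signal > 5
assert greet(5) == "Goodbye"  # path taken when signal <= 5 (boundary value)
```

Choosing 5 and 6 combines path coverage with boundary analysis: both sit directly on the decision boundary.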
39. Overall Goal
• Establish confidence that the software is
fit for purpose
• This does NOT mean completely free of
defects
• It means good enough for intended use,
and the type of use will determine the
degree of confidence that is needed
40. Tips for debugging coursework
• Learn how to use a debugger and get into the
habit of routinely using it
• Test it by predicting behaviour for a test set of
data
• When you find unexpected behaviour (a bug) try
to repeat it
• Think like a detective!