3. Information
Study, Design, Production: Jean-Antoine Moreau.
Intellectual property: Jean-Antoine Moreau.
Subject to copyright.
My copyrights are managed by ADAGP in France.
4. Introduction
Testing, and the proper sequencing of tests, is imperative before
any delivery of the software to its users: the customers.
5. Sequencing
Test sequencing is integrated into the phases of the engineering
process for the design and production of the software.
It is therefore thought out and defined just after validation of
the specifications, and structured from the start of the software
design phase.
7. "At the root of every computer error you will find at
least two human errors, including the error of blaming
the computer."
Tom Gilb
8. “Debugging is twice as hard as writing code, so if you
write code as smart as possible, you are, by definition,
not smart enough to debug it.”
Brian W. Kernighan.
9. "A good scientist is a person with original ideas. A good
engineer is a person who comes up with a design that
works with as few original ideas as possible. There is no
prima donna in engineering."
Freeman Dyson
12. The testing phases of software using or
integrating Artificial Intelligence.
❏ Unit test,
❏ Integration test,
❏ Validation test,
❏ System testing,
❏ Non-regression test,
❏ Health test (no bugs, no dead code, every variable defined and initialized, etc.),
❏ Smoke test (stability test of a version),
❏ Functional user acceptance test,
❏ Performance test,
❏ Security test,
❏ Compatibility test,
❏ Ergonomics test,
❏ Alpha and Beta testing,
❏ Usability testing.
13. Unit test
Test each component of the software independently to
ensure it is working properly.
● Testing an isolated unit of code,
● Verification of expected behavior,
● Test automation,
● Help with code maintenance,
● Early bug detection,
● Easier redesigns,
● Implicit documentation: unit tests can serve as
documentation of the expected behavior of functions.
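As an illustration, a minimal unit test sketch in Python. The `apply_discount` function and its behavior are invented for this example; the test functions can be run by any test runner such as pytest.

```python
def apply_discount(price: float, rate: float) -> float:
    """Hypothetical unit under test: apply a percentage discount."""
    if not 0.0 <= rate <= 1.0:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1.0 - rate), 2)

def test_apply_discount_nominal():
    # Verification of expected behavior on an isolated unit of code.
    assert apply_discount(100.0, 0.2) == 80.0

def test_apply_discount_rejects_invalid_rate():
    # The test doubles as implicit documentation: invalid rates must raise.
    try:
        apply_discount(100.0, 1.5)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for an out-of-range rate")
```

Each test exercises one behavior of the isolated unit, so a failure points directly at the component at fault.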
14. Integration test
Check the interaction between different modules or
components to ensure their consistency.
● Checking the interaction between modules,
● Testing interfaces between components,
● Error and exception handling,
● Validation of data flows,
● Performance tests,
● Specific integration scenarios,
➔ Compatibility tests.
15. Interface test
Testing the interfaces between the Artificial Intelligence
model used, the software, and the corporate information
system.
16. System test
Test the system as a whole to ensure that all
functionality is implemented correctly.
Ensure that the integrated functionality meets the
requirements of the entire system, particularly in terms
of performance, interoperability, and behavior under
various conditions.
17. Non-regression test
Verify that changes to software have not introduced
new bugs or regressions in functionality.
This involves testing old features after every update or
feature addition.
Hence the strategic importance of test automation.
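One common way to automate this is a golden-value check: the output of the validated version is stored, and every later build is compared against it. A minimal sketch, where `compute_report` and the stored golden value are illustrative placeholders:

```python
def compute_report(values):
    """Hypothetical existing feature whose behavior must not regress."""
    return {"count": len(values), "total": sum(values)}

# Output captured from the previously validated release.
GOLDEN = {"count": 3, "total": 6}

def test_report_has_not_regressed():
    # Re-run the old feature after every update or feature addition.
    assert compute_report([1, 2, 3]) == GOLDEN
```

If an update changes the output, the comparison fails and the regression is caught before delivery.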
18. Smoke test
Performed during the initial build of the software.
Ensures that all critical program features work and that
the program runs correctly.
Smoke tests can be performed manually or
automatically.
These tests are a subset of acceptance tests, and the
main objective of these tests is to validate the stability
of the new version so that it can be subjected to more
rigorous testing.
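A smoke test can be as simple as exercising each critical feature once and gating deeper testing on the result. A sketch under invented names (`start_app` and `open_main_screen` stand in for real critical features of the build):

```python
def start_app():
    """Placeholder for launching the build under test."""
    return {"status": "up"}

def open_main_screen(app):
    """Placeholder for one critical feature of the program."""
    return app["status"] == "up"

def smoke_test() -> bool:
    """Return True if the build is stable enough for more rigorous testing."""
    app = start_app()
    checks = [
        app["status"] == "up",   # the program starts at all
        open_main_screen(app),   # a critical feature responds
    ]
    return all(checks)
```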
20. Software health test (sanity test)
Sanity testing doesn't focus on core functionality, but
rather on verifying the software's rationality and
correctness.
The main objective of sanity testing is to ensure that
there are no bugs or false results in the component
processes.
21. Functional test
Functional testing aims to verify the application's
functionality against a set of requirements or
specifications.
Functional testing often includes testing portions of the
underlying code.
22. User Acceptance
Testing
This is the user acceptance phase.
Check that the software meets the needs and
expectations of the end users.
This is usually done by the users or by the customer
representatives.
23. If you integrate an artificial intelligence
chatbot:
➢ Check that it meets customer needs.
➢ Check that it does not scare away
your customers.
24. “It is not the employer who pays the
wages, but the customer.”
Henry Ford
25. Performance test
Evaluate the performance of the software under
different loads.
These tests assess the responsiveness, stability, and
scalability of a computer application.
Load Testing - simultaneous users, simultaneous
transactions.
Stress Testing - the software system, the software
application is pushed to the extreme (number of users,
number of transactions, network flows, data access etc.)
Volume Performance Testing.
Scalability Testing - Ability of the system to evolve -
example increase in resources such as servers etc.
26. Performance test
Stability/Endurance Testing - The ability of the system
to operate stably for an extended period of time.
Checking whether performance remains consistent over
several hours or days, and whether problems such as
memory leaks appear.
Response Time Testing
Configuration Testing - examines the performance of
the software on different hardware and software
configurations. It ensures that the software runs
efficiently on various system configurations.
27. Load and throughput
testing
Test the software's ability to handle a gradual
increase in load (user, transaction, information
flow).
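The gradual load increase can be sketched as a loop that grows the number of requests and records the average response time at each step. Here `handle_request` is a stand-in for the real system under test:

```python
import time

def handle_request(payload):
    """Stand-in for the real request handler under test."""
    return sum(payload)

def measure_load(step_sizes):
    """Run increasing batches of requests; return average latency per step."""
    results = {}
    for n in step_sizes:
        start = time.perf_counter()
        for i in range(n):
            handle_request([i, i + 1])
        elapsed = time.perf_counter() - start
        results[n] = elapsed / n  # average latency per request at this load
    return results

# Gradually increase the load: 10, 100, then 1000 simulated requests.
latencies = measure_load([10, 100, 1000])
```

Comparing the per-request latency across steps shows whether throughput degrades as the load grows.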
28. Security testing
Testing to identify software vulnerabilities and
security holes, including tests that verify data
access management, sensitive data handling, and
user access rights.
29. Security testing
● Penetration Testing (Penetration Testing or
Pen Test),
● Vulnerability Analysis,
● Authentication and Authorization Testing,
● Session Management Testing,
● Injection Attack Resistance Testing,
● Encryption Testing,
● Denial of Service (DoS) Attack Resistance
Testing,
● Configuration and Error Handling Testing,
● API Security Testing,
● Source Code Security Review,
● User Interface (UI) Testing.
30. Ergonomics test
Ergonomics test or user interface test.
● Check the user interface to ensure that it is
user-friendly and intuitive. This includes
accessibility and navigation tests.
● In the case of multi-screen setups, ensure the user
does not have to adopt postures harmful to their
spine, pelvis, hips, and knees.
● Also, the user must be reminded that breaks
of a few minutes are necessary every hour,
with mobility during these breaks: going to
see colleagues, doing a few stretches, going to
get a glass of water, etc.
31. Compatibility test
Check that the software works properly on different
environments:
● Browser,
● Operating system,
● Database management system,
● Network protocol,
● ERP,
● Cloud connection tool,
● Etc.
All of the above across different hardware configurations.
32. Alpha and Beta
Testing
Alpha Testing: Conducted by developers or an
internal team before the official launch of the
software product.
Beta Testing: Conducted by a select group of
external users, the Beta Testers, to obtain
feedback before the final release.
33. Now let's come
to the tests
related to the
use of Artificial
Intelligence by
software.
34. We need to check
● Accuracy,
● Truthfulness,
● Robustness,
of the Artificial Intelligence model used.
35. We need to check
That these qualities persist when the
artificial intelligence model is integrated
into the software or called by the software.
36. Model accuracy tests
Performance evaluation: verify that the AI model
achieves a sufficient level of accuracy for the
application. This includes evaluating performance using
metrics such as precision, recall, F1 score, ROC curve,
etc., depending on the type of problem (classification,
regression, etc.).
Verify expected results: ensure that the results provided
by the AI are those expected in specific cases and that
they meet the standards of the application domain.
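For a binary classifier, these metrics can be computed and compared against a minimum acceptance threshold. A self-contained sketch, where the labels, predictions, and the 0.5 threshold are all illustrative:

```python
def precision_recall_f1(y_true, y_pred):
    """Compute precision, recall, and F1 for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Illustrative ground truth and model predictions.
y_true = [1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1]
p, r, f1 = precision_recall_f1(y_true, y_pred)
assert f1 >= 0.5  # the accuracy level required by the application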
37. Model robustness tests
Adversarial testing: ensure that the model can withstand
disrupted or malicious inputs (adversarial examples) that
could impair its performance.
Generalization testing: verify that the model can
generalize correctly to previously unseen data and that
it does not suffer from overfitting.
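A basic adversarial-style robustness check perturbs inputs with small random noise and verifies that the decision does not flip. The threshold model below is a toy stand-in for the real model:

```python
import random

def model_predict(x):
    """Toy stand-in for the real model: class 1 if the mean exceeds 0.5."""
    return 1 if sum(x) / len(x) > 0.5 else 0

def is_robust(x, epsilon=0.01, trials=100, seed=0):
    """Check that small perturbations never change the model's decision."""
    rng = random.Random(seed)
    reference = model_predict(x)
    for _ in range(trials):
        perturbed = [v + rng.uniform(-epsilon, epsilon) for v in x]
        if model_predict(perturbed) != reference:
            return False  # a tiny perturbation flipped the decision
    return True

# An input far from the decision boundary should survive the perturbations.
assert is_robust([0.9, 0.8, 0.95])
```

Real adversarial testing uses gradient-based attacks rather than random noise, but the pass/fail structure is the same.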
38. Bias and fairness tests
Checking for bias in data: test the training data for
discriminatory biases (e.g., racial, gender, economic)
that could be reflected in the model's decisions. Also
test for conflicting or inconsistent data.
Fairness testing: ensure that the AI makes decisions
fairly and does not introduce injustice or
discrimination, particularly in contexts where human
impacts are significant (such as recruitment, criminal
justice, etc.).
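One simple fairness test is a demographic parity check: the rate of positive decisions should be similar across groups, and a large gap flags the model for review. A sketch with invented decision data and an illustrative threshold:

```python
def positive_rate(decisions):
    """Fraction of positive (1) decisions in a group."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in positive-decision rate between any two groups."""
    rates = [positive_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Illustrative model decisions, split by a sensitive attribute.
decisions = {
    "group_a": [1, 0, 1, 1, 0, 1],  # 4/6 positive decisions
    "group_b": [1, 0, 1, 0, 1, 1],  # 4/6 positive decisions
}
gap = demographic_parity_gap(decisions)
assert gap <= 0.1  # above this illustrative threshold, review the model
```

Demographic parity is only one of several fairness criteria; which one applies depends on the domain and on regulation.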
39. Transparency and explainability tests
Explainability of results: for models like deep neural
networks, which are often perceived as "black boxes,"
test whether the results can be explained in a way that a
user can understand. Tests should verify whether the AI
provides clear justifications for the decisions or
predictions it makes.
Decision traceability testing: check whether the model's
decisions can be transparently traced and justified for
audit or compliance purposes.
40. Performance testing
Response time testing: measure how quickly the model
processes inputs, particularly when deployed in real time
or in an environment with strict latency requirements.
Scalability testing: test whether the system can handle
an increase in data volume or a heavier load without
degrading performance.
41. Robustness tests and missing data management
Missing or incorrect value handling: test how the model
reacts to missing, incomplete, or incorrect inputs. The
software must be able to correctly handle or report these
situations.
Data sensitivity testing: verify the model's ability to
adapt to variations in the input data. For example, what
happens if new categories or types of data are
introduced?
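Missing-value handling can be tested through a wrapper that imputes defaults and reports what was missing, rather than failing silently. The model, feature names, and defaults below are all invented for the sketch:

```python
def model_predict(features):
    """Toy model: a weighted sum of two invented features."""
    return 0.3 * features["age"] + 0.7 * features["income"]

def predict_safely(features, defaults):
    """Impute missing inputs from defaults and report which were missing."""
    missing = [k for k in defaults if features.get(k) is None]
    filled = dict(features)
    for k in missing:
        filled[k] = defaults[k]
    return model_predict(filled), missing

score, missing = predict_safely(
    {"age": 40, "income": None},
    defaults={"age": 35, "income": 2000},
)
assert missing == ["income"]  # the missing input is reported, not hidden
```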
Model update tests
Adaptability check: when new data becomes available,
check whether the model can be re-tuned or re-trained
without compromising performance.
Model degradation testing: check that updates do not
result in significant performance loss or regression
compared to previous versions of the model.
42. Security testing
Attack vulnerability testing: test the model's resistance
to malicious attacks, such as data poisoning or
adversarial attacks, which aim to deceive the model.
Data confidentiality testing: ensure that the model does
not disclose sensitive or confidential data and complies
with confidentiality principles, especially with regard
to personal or sensitive data.
Integration tests with the environment
Integration testing: verify that the AI works correctly
when integrated with other systems and software, and that
there are no conflicts between components.
User interface testing: if the AI software interacts with
users, the usability of the interface must be tested to
ensure that users can easily understand and interact with
the results provided by the AI.
43. Compliance and regulatory testing
Compliance verification: ensure that the AI system
complies with the standards and laws applicable to the
use of artificial intelligence in its field.
Traceability and auditability testing: ensure that the
system can generate activity logs and detailed reports to
facilitate compliance with regulatory requirements.
Fatigue and user acceptance testing
Acceptability testing: verify that the AI system meets
end-user expectations and that users are satisfied with
the AI's results, performance, and decision-making in a
practical context.
Long-term testing: ensure the model remains performant
over time, with tests conducted over an extended period
to observe changes in its performance and detect
potential deviations.
44. Deployment and version management testing
Production deployment testing: verify that the AI model
functions correctly in the production environment, is
stable, and integrates well with other systems.
Version management testing: ensure that model updates do
not create regressions or inconsistencies in predictions.
These tests are crucial to ensure that AI-powered
software is reliable, performant, and compliant with
security, ethical, and regulatory requirements.
45. In conclusion
“Quality is free. It’s not a gift, but it’s free. What costs money
are the unquality things: all the actions that involve not doing
jobs right the first time.”
Philip B. Crosby - Expert in Quality Management.