2. Feasibility Study
•A feasibility study in software engineering assesses a project's viability
before initiation. It involves data collection and evaluation based on
time, budget, and other constraints. If deemed feasible, the team
plans execution by documenting each phase using techniques like
DFD modeling and dynamic flow mapping.
3. Requirement Engineering
•A systematic and strict approach to the definition, creation, and
verification of requirements for a software system is known as
requirements engineering. To guarantee the effective creation
of a software product, the requirements engineering process
entails several tasks that help in understanding, recording, and
managing the demands of stakeholders.
4. ◻ Requirements engineering is the systematic use of:
🞑 Proven principles.
🞑 Techniques and language tools.
🞑 Cost-effective analysis.
🞑 Documentation.
🞑 On-going evaluation of the user's needs.
🞑 Specifications of the external behavior of a system to satisfy those user
needs.
7. 1. Feasibility Study
•A feasibility study in software engineering assesses a project's viability
before initiation. It involves data collection and evaluation based on
time, budget, and other constraints.
8. Requirements elicitation
•Requirements elicitation is the process of gathering
information from stakeholders to understand their needs
and expectations. This process involves various techniques,
including interviews, questionnaires, workshops, and
observations. The goal is to collect as much relevant
information as possible to define the requirements
accurately.
9. • Interviews
• Interviews are one-on-one or group discussions with stakeholders to gather their needs
and expectations. These discussions can be structured, semi-structured, or unstructured,
depending on the context and the stakeholders involved. Interviews provide in-depth
insights and help build a rapport with stakeholders.
• Questionnaires
• Questionnaires are written sets of questions distributed to stakeholders to gather their
requirements. They are useful for collecting information from a large number of
stakeholders quickly. Questionnaires can be open-ended or closed-ended, depending on
the type of information needed.
• Workshops
• Workshops are collaborative sessions where stakeholders and requirements engineers
work together to identify and document requirements. Workshops foster active
participation and help in reaching a consensus on the requirements. They are particularly
useful for complex projects with multiple stakeholders.
• Observations
• Observations involve watching stakeholders perform their tasks to understand their needs
and challenges. This technique is useful for gathering requirements that stakeholders may
not be able to articulate clearly. Observations provide a real-world context for the
requirements.
10. Requirements Specification
• This activity is used to produce formal software requirement models.
All the requirements including the functional as well as the
non-functional requirements and the constraints are specified by
these models in totality.
• During specification, more knowledge about the problem may be
required which can again trigger the elicitation process.
• The models used at this stage include ER diagrams, data flow
diagrams (DFDs), function decomposition diagrams (FDDs), data
dictionaries, etc.
11. Requirements Verification and Validation
• Verification: It refers to the set of tasks that ensures that the software
correctly implements a specific function.
• Validation: It refers to a different set of tasks that ensures that the
software that has been built is traceable to customer requirements. If
requirements are not validated, errors in the requirement definitions would
propagate to the successive stages resulting in a lot of modification and
rework. The main steps for this process include:
1. The requirements should be consistent with all the other requirements i.e.
no two requirements should conflict with each other.
2. The requirements should be complete in every sense.
3. The requirements should be practically achievable.
12. Requirements Management
•Requirements management is the process of maintaining
and controlling the requirements throughout the project
lifecycle. It involves tracking changes to requirements,
managing dependencies, and ensuring that the
requirements are implemented as specified.
13. Feasibility Study
•A feasibility study in software engineering assesses a project's viability
before initiation. It involves data collection and evaluation based on
time, budget, and other constraints. If deemed feasible, the team
plans execution by documenting each phase using techniques like
DFD modeling and dynamic flow mapping.
15. • 1. Technical feasibility
Technical feasibility assesses the availability of hardware, software, and technology
needed for project development. It evaluates existing resources, the technical
team's capabilities, the suitability of current technology, and its ease of
maintenance and upgrades.
• Analyze the technical skills and capabilities of the software development
team members.
• Determine if the relevant technology is stable and established.
• Assess whether the technologies chosen for software development have a
large user base, so that others can be consulted when problems arise or
improvements are needed.
16. • Operational Feasibility
•Operational feasibility analyzes the level of service delivery
according to requirements and the ease of operating and
maintaining the product after deployment.
•Determine if the expected issue in the user request is a high
priority.
•Determine if the organization is satisfied with alternative
solutions proposed by the software development team.
•Determine if the solution proposed by the software
development team is acceptable.
•Analyze whether users are comfortable with new software.
17. •Economic feasibility
Economic feasibility analyzes project costs and benefits; it is
essentially a cost-benefit analysis. As part of this feasibility study, a
detailed analysis of the costs of the development project is made.
This includes all costs necessary for the final development, such as
the hardware and software resources required, design and
development costs, operating costs, etc.
18. •Legal Feasibility
•In a legal feasibility study, the project is analyzed from the
view of legality. This includes analysis of obstacles in the
legal implementation of the project, data protection or social
media laws, project certificates, licenses, copyrights, etc.
Overall, a legal feasibility study is a study to determine
whether a proposed project meets legal and ethical
requirements.
19. •Schedule Feasibility
•A schedule feasibility study mainly analyzes the proposed
project timelines and deadlines, including the time it will take
the team to complete the final project. This has a significant
impact on the organization, as the project's purpose may fail
if it is not completed on time.
22. DFD (Data Flow Diagram)
•Process modeling involves graphically representing the processes or
actions that capture, manipulate, store, and distribute data between a
system and its environment and among components within the
system. A common form of a process model is a data-flow diagram
(DFD).
23. Symbols and Notations Used in DFDs
• Several common systems of symbols are named after their creators:
•Yourdon and Coad
•Yourdon and DeMarco
•Gane and Sarson
24. Components of Data Flow Diagram
•The components of a Data Flow Diagram are always the same, but there
are different diagrammatic notations used. The notation used here is
the one adopted by the methodology known as Structured Systems
Analysis and Design Method (SSADM). There are four different symbols
that are normally used on a structured DFD. The elements represented are:
•External entities
•Processes
•Data stores
•Data Flows
25. External entities
These are the starting and ending points for the data flow in a
DFD. External entities are placed on the edges of a DFD to
represent the input and output of information to the entire
system or process.
An external entity could be a person, organization or system. For
example, a customer could be an external entity in a DFD that
models the process of making a purchase and receiving a sales
receipt. External entities are also known as terminators, actors,
sources and sinks.
26. Process
Processes are activities that change or transform data. These
activities could include computation, sorting, validation,
redirection or any other transformation required to advance that
segment of the data flow. For example, a credit card payment
verification would be a process that occurs within a customer's
purchase DFD.
27. Data stores
These are the locations in a DFD where data is stored for later
use. Data stores could represent databases, documents, files or
any repository for data storage. For example, data stores in a
product fulfillment DFD might include a customer address
database, product inventory database and a delivery schedule
spreadsheet.
28. Data flows
Data flows are the routes that information takes as it travels
between external entities, processes and data stores. For
example, in an e-commerce DFD, the route that connects a
user entering login credentials with an authentication gateway
would be a data flow.
30. Why we use DFD
• Gain clarity: A visual representation with simple symbols and labels
provides a clearer understanding of complex systems than paragraphs of
descriptive text.
• Analyze systems: DFDs show the relationships and interactions between
the components of a system or process for easier analysis.
• Identify problems: DFDs can make it easier to isolate system design
problems such as bottlenecks, inconsistencies, redundancies and others.
• Improve processes: DFDs help analysts visualize new ways to optimize
data flows to accelerate and improve business processes.
31. Notation comparison (process symbol / data store symbol / external entity / primary usage)

Yourdon & DeMarco: circle/oval process; open-ended rectangle data store;
square external entity; primary usage: general structured analysis.

Yourdon & Coad: circle/oval process; open-ended rectangle data store;
rectangle external entity; primary usage: object-oriented analysis.

Gane & Sarson: rounded rectangle process; data store drawn as a rectangle
with double vertical lines; square external entity; primary usage: business
systems modeling.

SSADM: rounded rectangle process; open-ended rectangle data store;
square external entity; primary usage: government/public sector systems.
32. Physical DFD vs. Logical DFD
• Data flows: In a physical DFD, data flow names include implementation
facts such as names, numbers, media, timing, etc. In a logical DFD, data
flow names describe the data they contain and do not refer to the form
or document on which they reside.
• Processes: In a physical DFD, process names include the name of the
processor, i.e. the person, department, or computer. In a logical DFD,
process names describe the work done without referring to the
processor, e.g. Accounts Receivable, Order Processing.
• Data stores: In a physical DFD, data stores identify their computer and
manual implementation. In a logical DFD, the physical location of data
stores is irrelevant; many times the same data store may be shared by
subsystems and processes.
• Overall: The physical DFD is more realistic and implementation
oriented, and more detailed in nature. The logical DFD, as the name
suggests, is more logical in format, more abstract than the physical
DFD, and less dependent on implementation steps.
33. DFD Characteristics are:
•DFDs can be used to model physical or logical, current or new systems.
•DFDs do not represent procedural or time-related processes.
•Revisions of the same DFD are made to improve the model based on
understanding.
•The decision to stop iterative decomposition may be difficult.
34. DFD Rules
1. Each process has at least one outgoing data flow and at least one
incoming data flow.
2. Each process can connect to any other symbol (other processes, data
stores, and entities).
35. 3. Each data store should have at least one incoming and at least one
outgoing data flow.
4. Entities must be connected to a process by a data flow.
36. 5. Data flows cannot cross each other.
6. Data stores cannot be connected to external entities. Otherwise, it
means you're allowing an external entity access to your data files and
stores.
7. Process labels should be verb phrases. Data stores are labeled with
nouns.
(A small code sketch that checks two of these rules follows.)
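To make the rules above concrete, here is a minimal Python sketch (not from the slides) that represents a tiny, made-up DFD as plain data and checks it against Rule 1 and Rule 6. The "Process Order" example and its flows are invented purely for illustration.

```python
# Minimal sketch: a hand-made DFD checked against Rule 1 and Rule 6.
from collections import defaultdict

ENTITIES = {"Customer"}
PROCESSES = {"Process Order"}
DATA_STORES = {"Orders Store"}

# Each flow: (source, destination, label).
flows = [
    ("Customer", "Process Order", "order details"),
    ("Process Order", "Orders Store", "order record"),
    ("Process Order", "Customer", "order confirmation"),
]

incoming = defaultdict(int)
outgoing = defaultdict(int)
for src, dst, _ in flows:
    outgoing[src] += 1
    incoming[dst] += 1

# Rule 1: each process has at least one incoming and one outgoing data flow.
for p in PROCESSES:
    assert incoming[p] >= 1 and outgoing[p] >= 1, f"Rule 1 violated by {p}"

# Rule 6: data stores must not be connected directly to external entities.
for src, dst, _ in flows:
    assert not ({src, dst} & DATA_STORES and {src, dst} & ENTITIES), \
        f"Rule 6 violated by flow {src} -> {dst}"

print("All checked DFD rules hold for this diagram.")
```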
37. Decomposition
•Each bubble in the DFD represents a function performed by the
system. The bubbles are decomposed into subfunctions at the
successive levels of the DFD model. Decomposition of a bubble is also
known as factoring or exploding a bubble. Each bubble at any level of
the DFD is usually decomposed into between three and seven bubbles.
Too few bubbles at any level make that level superfluous.
38. Levels of DFD
• It is very difficult to explain all the processes in just one DFD, that
is why DFDs are expressed in a set of DFD levels. The first step in
creating DFDs is to identify the DFD elements (external entities,
processes, data stores, and data flows) explained in the section
above.
• Context Diagrams – The Context Diagram is the highest-level DFD,
providing a simple overview of the entire system. It consists of a single
process that represents the whole system, showing how it interacts with
external entities (such as users, other systems, or databases). The main goal
of Level 0 is to define the system boundaries and establish the data
exchanges between the system and its external environment without going
into internal details.
40. Level 1
•Level 1 DFD expands on the Context Diagram by breaking down the
single process into multiple sub-processes. These sub-processes show
how data flows between different parts of the system, detailing the
major functional components. It also introduces data stores, which
represent where data is temporarily or permanently stored. Level 1
provides a more structured view of how the system operates
internally while still maintaining a high-level understanding.
42. Level 2 provides an even more detailed view of the system by breaking
down the sub-processes identified in the Level 1 Data Flow Diagram
(DFD) into further sub-processes. Each sub-process is depicted as a
separate process on the Level 2 DFD. The data flows and data stores
associated with each sub-process are also shown.
A Level 2 Data Flow Diagram (DFD) goes one step deeper into parts of
the Level 1 DFD. It can be used to plan or record the specific/necessary
detail about the system's functioning.
47. Data Dictionary
•Data Dictionary is the major component in the structured
analysis model of the system. It lists all the data items
appearing in DFD.
•The data dictionary is a centralized repository of information
about data. It provides a detailed description of the data,
including its meaning, relationship to other data, usage, and
format.
48. 1. Data dictionaries are important in database management, data modeling, and
software development.
2. They help to ensure consistency, clarity, and efficient data management.
3. They help to ensure that everyone in the organization understands the
data, how it should be used, and how it should be managed.
4. They are essential for maintaining data quality and facilitating effective
data governance.
49. Data Dictionaries
Data dictionaries are simply repositories that store information
about all data items defined in the DFD.
At the requirements stage, the data dictionary should at least
define customer data items, to ensure that the customer and
developer use the same definitions and terminology.
Typical information stored includes:
• Name of the data item
• Aliases
• Description/Purpose
• Related data items
• Range of values
• Data structure definition
50. Data Dictionaries
The data dictionary has two different kinds of
items: composite data and elemental data.
• Higher-level (composite) elements may be defined in terms of
lower-level items.
• Elemental data are items that cannot be reduced any further
and are defined in terms of the values that they can assume or
some other unambiguous base type.
The general format of a data dictionary is similar to a standard
dictionary; it contains an alphabetically ordered list of entries.
Each entry consists of a formal definition and a verbal description.
51. COMPOSITE DATA
Composite data can be constructed in three ways: sequence, selection, or
repetition of other data types.
sequence:
+ A plus sign indicates one element is followed by or concatenated with
another element.
selection:
[ ] Square brackets are used to enclose alternative or optional elements.
| A vertical line stands for "or" and is used to indicate alternatives.
repetition:
{ } Curly braces indicate that the element being defined is made up of a series
of repetitions of the element(s) enclosed in the braces.
{ }y The upper limit on the number of allowable repetitions can be indicated by
including an integer superscript on the right brace. Here y represents an
integer and indicates that the number of repetitions cannot be greater than y.
52. Examples of COMPOSITE DATA
sequence:
Mailing-address = name + address + city + zip-code
* The address at which the customer can receive mail *
repetition:
Completed-order = { item-ordered }
* A complete customer order *
selection:
Atm-transaction = [ deposit | withdrawal ]
* A customer transaction at an automated teller machine *
In these examples asterisks are used to delimit the comment or verbal
description of the item, but other notations can be used as well.
53. Examples of ELEMENTAL DATA
Elemental data is described in terms of a data type or by listing the values that the
item can assume.
desired-temperature = floating point
*Desired-temperature is the value the user sets for desired water temperature
in the pool. It is a floating point number in the range from 0.0 to 100.0, inclusive.
The units are Celsius.*
age = non-negative integer
*Age is how old the customer is in years. Age is a whole number greater than
or equal to zero.*
performances-attended = counter
*Performances-attended is the number of performances this customer has
attended.*
counter = positive integer
*Counter is a whole number greater than zero that can only be incremented by
one and never decremented.*
54. Mathematical Operators used within the Data Dictionary

x = a + b      x consists of data elements a and b
x = [a | b]    x consists of either data element a or data element b
x = a          x consists of data element a
x = y{a}       x consists of y or more occurrences of data element a
x = {a}z       x consists of z or fewer occurrences of data element a
x = y{a}z      x consists of between y and z occurrences of data element a
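As a small illustration (not part of the original slides), the three composite-data forms above can be written down in Python and rendered back into the dictionary notation. The entry names mirror the Mailing-address, Atm-transaction and Completed-order examples from the earlier slide; the representation chosen here is just one possible sketch.

```python
# Minimal sketch: data-dictionary entries as tagged Python structures.
data_dictionary = {
    # sequence: x = a + b
    "Mailing-address": ("seq", ["name", "address", "city", "zip-code"]),
    # selection: x = [a | b]
    "Atm-transaction": ("sel", ["deposit", "withdrawal"]),
    # repetition: x = 1{a}  (one or more occurrences)
    "Completed-order": ("rep", "item-ordered", 1, None),
}

def render(name):
    """Rebuild the textual data-dictionary definition for one entry."""
    entry = data_dictionary[name]
    kind = entry[0]
    if kind == "seq":
        return name + " = " + " + ".join(entry[1])
    if kind == "sel":
        return name + " = [" + " | ".join(entry[1]) + "]"
    kind, element, low, high = entry
    return f"{name} = {low}{{{element}}}{'' if high is None else high}"

for item in data_dictionary:
    print(render(item))
```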
56. Introduction
• An Entity-Relationship model (ER model) is an abstract
way to describe a database.
• It is a visual representation of different data using
conventions that describe how these data are related to
each other.
57. Basic elements in ER model
There are three basic elements in ER models:
Entities are the "things" about which we seek
information.
Attributes are the data we collect about the entities.
Relationships provide the structure needed to draw
information from multiple entities.
59. ER Model
Entity
Entities are represented by means of rectangles. Rectangles are named with the
entity set they represent.
[Image: Entities in a school database]
Attributes
Attributes are properties of entities. Attributes are represented by means of
ellipses. Each ellipse represents one attribute and is directly connected to its
entity (rectangle).
60. ER Model
Relationship
A relationship describes how entities interact. For example, the entity
"carpenter" may be related to the entity "table" by the relationship "builds" or
"makes". Relationships are represented by diamond shapes.
61. TYPES OF ATTRIBUTES
There are six types of attributes:
1. Simple attribute
2. Composite attribute
3. Derived attribute
4. Stored attribute
5. Single-valued attribute
6. Multi-valued attribute
62. TYPES OF ATTRIBUTES
•Simple attribute: Cannot be split into further attributes (indivisible). Also
known as an atomic attribute. Ex: SSN (Social Security Number) or Roll No.
63. TYPES OF ATTRIBUTES
•Composite attribute:
🞑Can be divided into smaller subparts which represent more basic attributes
with independent meaning.
🞑Can even form a hierarchy.
Ex: Address, Name
64. TYPES OF ATTRIBUTES
•Derived attribute:
🞑Attribute whose values are derived from another attribute.
🞑Denoted by a dotted oval.
Ex: Age
65. TYPES OF ATTRIBUTES
Stored attribute:
An attribute from which the values of other attributes are derived.
For example, ‘Date of birth’ of a person is a stored attribute.
66. TYPES OF ATTRIBUTES
Single-valued attribute:
🞑A single-valued attribute can have only a single value.
🞑For example, a person can have only one ‘date of birth’, ‘age’,
Social_Security_Number, etc.
🞑A single-valued attribute can, however, be either simple or composite:
‘date of birth’ is a composite attribute and ‘age’ is a simple
attribute, but both are single-valued attributes.
67. TYPES OF ATTRIBUTES
Multi-valued attribute:
An attribute that may contain more than one value. Denoted by a double-lined
oval connected to the entity in the ER diagram.
Ex: Phone number, College degree, email addresses, etc.
68. KEYS
Keys are of the following types:
1. Candidate Key
2. Composite Key
3. Primary Key
4. Foreign Key
The name of each primary key attribute is
underlined.
69. CANDIDATE KEY
• A simple or composite key that is unique and minimal.
•Unique – no two rows in a table may have the same value at
any time.
•Minimal – every column must be necessary for uniqueness.
•For example, for the entity
Employee(EID, First Name, Last Name, SIN, Address, Phone,
BirthDate, Salary, DepartmentID)
possible candidate keys are EID and SIN.
70. COMPOSITE KEY
•Composed of more than one attribute.
•For example: First Name and Last Name (assuming there is
no one else in the company with the same name), or Last Name
and DepartmentID (assuming two people with the same last
name don’t work in the same department).
•A composite key can have two or more attributes, but it must
be minimal.
71. PRIMARY KEY
•A candidate key selected by the designer to uniquely
identify tuples in a table. It must not be null.
•The key chosen by the database designer to be used as the
identifying mechanism for the whole entity set is
referred to as the primary key. This key is indicated by
underlining the attribute in the ER model.
•For example, in Employee(EID, First Name, Last Name, SIN,
Address, Phone, BirthDate, Salary, DepartmentID), EID is
the primary key.
72. FOREIGN KEY
•An attribute in one table that references the primary key of
another table; it may also be null.
•Both the foreign key and the primary key it references must be of the
same data type.
•For example, in Employee(EID, First Name, Last Name, SIN,
Address, Phone, BirthDate, Salary, DepartmentID),
DepartmentID is the foreign key (see the sketch below).
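A minimal sketch (not from the slides) of how the Employee/Department primary-key and foreign-key example could be declared and enforced, using Python's built-in sqlite3 module. Table and column names follow the slide's example; the sample data is invented.

```python
# Minimal sketch: primary and foreign keys with Python's sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled

conn.execute("""
    CREATE TABLE Department (
        DepartmentID INTEGER PRIMARY KEY,   -- primary key of the referenced table
        Name         TEXT NOT NULL
    )""")

conn.execute("""
    CREATE TABLE Employee (
        EID          INTEGER PRIMARY KEY,   -- primary key: unique, not null
        FirstName    TEXT,
        LastName     TEXT,
        SIN          TEXT UNIQUE,           -- another candidate key
        DepartmentID INTEGER REFERENCES Department(DepartmentID)  -- foreign key (may be NULL)
    )""")

conn.execute("INSERT INTO Department VALUES (10, 'Accounts')")
conn.execute("INSERT INTO Employee VALUES (1, 'Asha', 'Rao', '111-222-333', 10)")

# Inserting an employee whose DepartmentID does not exist is rejected:
try:
    conn.execute("INSERT INTO Employee VALUES (2, 'Ben', 'Lee', '444-555-666', 99)")
except sqlite3.IntegrityError as err:
    print("Rejected:", err)
```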
74. DEGREE OF A RELATIONSHIP
The degree of a relationship is the number of entity types that participate in it.
If there are two entity types involved, it is a binary relationship type.
If there are three entity types involved, it is a ternary relationship type.
It is possible to have an n-ary relationship (e.g. quaternary).
[Diagram: a SELLS relationship connecting SALESASSIST, PRODUCT and CUSTOMER]
75. CARDINALITY CONSTRAINTS
Cardinality is the number of instances of one entity that can or must be
associated with each instance of another entity.
If we have two entity types A and B, the cardinality constraint
specifies the number of instances of entity B that can (or must) be
associated with entity A.
The four possible categories are:
One to one (1:1) relationship
One to many (1:m) relationship
Many to one (m:1) relationship
Many to many (m:n) relationship
80. PARTICIPATION CONSTRAINTS (OPTIONALITY)
• Specifies the minimum number of relationship instances each entity
can participate in.
• This is called the minimum cardinality constraint.
• It specifies whether the existence of an entity depends on its being related
to another entity via the relationship.
• The two types of participation are: Total and Partial.
81. • Ex: If company policy says that every employee must work for a
department, then the participation of EMPLOYEE in WORKS-FOR is total.
• Total participation is also called existence dependency.
• Every entity in the total set of employees must be related to a department via
WORKS-FOR.
• But we cannot say that every employee must MANAGE a department;
hence that relationship is partial.
• Total participation is indicated by a double line and partial participation by a
single line.
[Diagram: WORKS-FOR relationship between EMPLOYEE and DEPARTMENT with cardinality 1:N]
82. GENERALIZATION
Generalization is the process of combining entities, where the generalized entity
contains the properties of all the entities being generalized. In
generalization, a number of entities are brought together into one generalized
entity based on their similar characteristics. For example, pigeon, house
sparrow, crow and dove can all be generalized as Birds.
83. SPECIALIZATION
Specialization is a process which is the opposite of generalization, as mentioned above. In
specialization, a group of entities is divided into sub-groups based on their
characteristics. Take the group Person, for example. A person has a name, date of birth,
gender, etc. These properties are common to all persons. But in a
company, a person can be identified as an employee, employer, customer or vendor
based on the role they play in the company.
84. INHERITANCE
One of the important features of generalization and specialization is inheritance: the
attributes of higher-level entities are inherited by the lower-level entities.
For example, attributes of a person such as name, age, and gender can
be inherited by lower-level entities such as student and teacher.
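A minimal Python sketch (not from the slides) of the inheritance idea above: the generalized Person class holds the common attributes, and the specialized Student and Teacher classes inherit them while adding their own. The attribute names beyond name/age/gender are invented for illustration.

```python
# Minimal sketch: generalization/specialization expressed as class inheritance.
class Person:
    """Higher-level (generalized) entity: common attributes live here."""
    def __init__(self, name, age, gender):
        self.name = name
        self.age = age
        self.gender = gender

class Student(Person):
    """Lower-level (specialized) entity: inherits name, age, gender."""
    def __init__(self, name, age, gender, roll_no):
        super().__init__(name, age, gender)
        self.roll_no = roll_no

class Teacher(Person):
    """Another specialization of Person."""
    def __init__(self, name, age, gender, subject):
        super().__init__(name, age, gender)
        self.subject = subject

s = Student("Asha", 20, "F", roll_no=42)
t = Teacher("Ravi", 35, "M", subject="Databases")
print(s.name, s.roll_no)   # inherited attribute + specialized attribute
print(t.name, t.subject)
```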
85. Benefits of ER diagrams
ER diagrams constitute a very useful framework for creating and
manipulating databases.
First, ER diagrams are easy to understand and do not require a person to
undergo extensive training to be able to work with them efficiently and
accurately. This means that designers can use ER diagrams to easily
communicate with developers, customers, and end users, regardless of their
IT proficiency.
Second, ER diagrams are readily translatable into relational tables which can
be used to quickly build databases. In addition, ER diagrams can directly be
used by database developers as the blueprint for implementing data in
specific software applications.
Lastly, ER diagrams may be applied in other contexts such as describing the
different relationships and operations within an organization.
86. More techniques for data analysis
There are two main techniques available to
analyze and represent complex processing
logic:
1. Decision trees and
2. Decision tables.
87. DECISION TREES
A decision tree gives a graphic view of the
processing logic involved in decision
making and the corresponding actions
taken.
The edges of a decision tree represent
conditions and the leaf nodes represent
the actions to be performed depending on
the outcome of testing the condition.
88. EXAMPLE
Consider Library Membership Automation Software (LMS) where it should
support the following three options:
1. New member
2. Renewal
3. Cancel membership
New member option
Decision: When the 'new member' option is selected, the software asks
details about the member like member's name, address, phone number etc.
Action: If proper information is entered, then a membership record for the
member is created and a bill is printed for the annual membership charge plus
the security deposit payable.
89. Example Contd..
Renewal option
Decision: If the 'renewal' option is chosen, the LMS asks for the member's
name and his membership number to check whether he is a valid member or
not.
Action: If the membership is valid then membership expiry date is updated
and the annual membership bill is printed, otherwise an error message is
displayed.
Cancel membership option
Decision: If the 'cancel membership' option is selected, then the software
asks for member's name and his membership number.
Action: The membership is cancelled, a cheque for the balance amount due
to the member is printed and finally the membership record is deleted from
the database.
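The decision/action pairs above translate almost directly into code. Below is a minimal Python sketch, not from the slides: record storage, membership numbers, bill amounts and expiry dates are simplified placeholders invented for illustration.

```python
# Minimal sketch of the three LMS options: new member, renewal, cancellation.
members = {}          # membership_number -> member record
ANNUAL_CHARGE = 500   # illustrative amounts only
SECURITY_DEPOSIT = 1000

def new_member(name, address, phone):
    number = len(members) + 1
    members[number] = {"name": name, "address": address,
                       "phone": phone, "expiry": "2026-12-31"}
    print(f"Bill: {ANNUAL_CHARGE + SECURITY_DEPOSIT} (annual charge + deposit)")
    return number

def renew(name, number):
    member = members.get(number)
    if member is None or member["name"] != name:      # decision: valid member?
        print("Error: not a valid member")
        return
    member["expiry"] = "2027-12-31"                   # action: update expiry date
    print(f"Bill: {ANNUAL_CHARGE} (annual membership)")

def cancel(name, number):
    member = members.get(number)
    if member is None or member["name"] != name:      # decision: valid member?
        print("Error: not a valid member")
        return
    del members[number]                               # action: delete the record
    print(f"Cheque printed for balance due to {name}")

num = new_member("Asha", "12 Park Road", "555-0101")
renew("Asha", num)
cancel("Asha", num)
```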
92. Decision tables are used in various engineering fields to represent
complex logical relationships. Decision table testing is a very effective tool for
testing software and managing its requirements.
The output may depend on many input conditions, and decision
tables give a tabular view of the various combinations of input
conditions, expressed as True (T) and False (F).
A decision table also provides the set of conditions and the corresponding
actions required in testing.
93. In software testing, the decision table has 4 parts, which are divided into portions as
given below:
Condition Stubs: The conditions are listed in the upper left part of the decision table and are used
to determine a particular action or set of actions.
Action Stubs: All the possible actions are given in the lower left portion (i.e., below the condition
stubs) of the decision table.
Condition Entries: The condition values are entered in the upper right portion of the
decision table. In the condition entries part of the table, there are multiple rows and columns, which
are known as rules.
Action Entries: In the action entries, every entry has some associated action or set of actions in the
lower right portion of the decision table, and these values are called outputs.
94. DECISION TABLES
Decision tables are used mainly because of their visibility,
clearness, coverage capabilities, low maintenance and
automation fitness.
STRUCTURE
The four quadrants:
Conditions (stub)   | Condition alternatives (entries)
Actions (stub)      | Action entries
96. EXAMPLE
Suppose a technical support company
writes a decision table to diagnose printer
problems based upon symptoms
described to them over the phone from
their clients.
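Here is one way such a table could be turned into code: a minimal Python sketch in which every combination of condition entries (a rule) is enumerated and mapped to its action entries. The concrete printer symptoms and actions are assumed for illustration, not taken from the slides.

```python
# Minimal sketch: a printer-troubleshooting decision table as code.
from itertools import product

CONDITIONS = ["printer prints", "red light flashing", "printer recognised"]

def actions(prints, flashing, recognised):
    """Action entries for one rule (one combination of condition entries)."""
    acts = []
    if not recognised:
        acts.append("Check the printer-computer cable")
        acts.append("Ensure printer software is installed")
    if flashing:
        acts.append("Check/replace ink cartridge")
    if not prints and recognised and not flashing:
        acts.append("Check for paper jam")
    return acts or ["No action needed"]

# Enumerate all 2**3 = 8 rules (the columns of the decision table).
for rule, combo in enumerate(product([True, False], repeat=3), start=1):
    entries = ", ".join(f"{c}={'T' if v else 'F'}" for c, v in zip(CONDITIONS, combo))
    print(f"Rule {rule}: {entries} -> {actions(*combo)}")
```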
98. BENEFITS
🞑Decision tables, especially when coupled with the use of
a domain-specific language, allow developers and policy experts
to work from the same information, the decision tables
themselves.
🞑Tools to render nested if statements from traditional
programming languages into decision tables can also be used as
a debugging tool.
🞑Decision tables have proven to be easier to understand and
review than code, and have been used extensively and
successfully to produce specifications for complex systems.
100. REQUIREMENTS DOCUMENTATION
This is the way of representing requirements in a consistent
format.
It is called the Software Requirements Specification (SRS).
The SRS serves many purposes depending upon who is writing it:
🞑 written by the customer
🞑 written by the developer
It serves as a contract between customer and developer.
101. Nature of the SRS
The basic issues that the SRS writer(s) shall address are the
following:
a) Functionality
b) External Interfaces
c) Performance
d) Attributes
e) Design Constraints imposed on an implementation.
SRS Should
-- Correctly define all requirements
-- not describe any design details
-- not impose any additional constraints
102. Characteristics of a good SRS
SRS should be
1. Correct
2. Unambiguous
3. Complete
4. Consistent
5. Ranked for importance and/or stability
6. Verifiable
7. Modifiable
8. Traceable
103. 1. Complete: The SRS should include all the requirements for the software system, including both functional and
non-functional requirements.
2. Consistent: The SRS should be consistent in its use of terminology and formatting, and should be free of
contradictions.
3. Unambiguous: The SRS should be clear and specific, and should avoid using vague or imprecise language.
4. Traceable: The SRS should be traceable to other documents and artifacts, such as use cases and user stories, to
ensure that all requirements are being met.
5. Verifiable: The SRS should be verifiable, which means that the requirements can be tested and validated to
ensure that they are being met.
6. Modifiable: The SRS should be modifiable, so that it can be updated and changed as the software development
process progresses.
7. Prioritized: The SRS should prioritize requirements, so that the most important requirements are addressed
first.
104. 8.Testable: The SRS should be written in a way that allows the requirements to be
tested and validated.
9.Relevant: The SRS should be relevant to the software system that is being
developed, and should not include unnecessary or irrelevant information.
10.High-level and low-level: The SRS should provide both high-level requirements
(such as overall system objectives) and low-level requirements (such as detailed
functional requirements).
11.Human-readable: The SRS should be written in a way that is easy for
non-technical stakeholders to understand and review.
105. Advantages of an SRS
The SRS establishes the basis for agreement between the
client and the supplier on what the software product will do.
1. An SRS provides a reference for validation of the final product.
2. A high-quality SRS is a prerequisite to high-quality software.
3. A high-quality SRS reduces the development cost.
106. Problems without a SRS Document
The important problems that an organization would face if it does not
develop an SRS document are as follows:
• Without developing the SRS document, the system would not be
implemented according to customer needs.
• Software developers would not know whether what they are
developing is what exactly is required by the customer.
• Without SRS document, it will be very difficult for the maintenance
engineers to understand the functionality of the system.
• It will be very difficult for user document writers to write the users’
manuals properly without understanding the SRS document.
107. Organization of the SRS
IEEE has published guidelines and standards to organize an SRS.
1. Introduction
1.1 Purpose
1.2 Scope
1.3 Definition, Acronyms and Abbreviations
1.4 References
1.5 Overview
2. The Overall Description
2.1 Product Perspective
2.1.1 System Interfaces
2.1.2 Interfaces
2.1.3 Hardware Interfaces
2.1.4 Software Interfaces
2.1.5 Communication Interfaces
2.1.6 Memory Constraints
2.1.7 Operations
2.1.8 Site Adaptation Requirements
108. Organization of the SRS
2.2 Product Functions
2.3 User Characteristics
2.4 Constraints
2.5 Assumptions and Dependencies
2.6 Apportioning of Requirements
3. Specific Requirements
3.1 External Interfaces
3.2 Functions
3.3 Performance Requirements
3.4 Logical Database Requirements
3.5 Design Constraints
3.6 Software System Attributes
3.7 Organization of Specific Requirements
3.8 Additional Comments
4. Change Management Process
5. Document Approvals
6. Supporting Information
110. REQUIREMENTS VALIDATION
After the completion of SRS document, we would like to check the
document for:
🞑Completeness & consistency
🞑Conformance to standards
🞑Requirements conflicts
🞑Technical errors
🞑Ambiguous requirements
The objective of requirements validation is to certify that the
SRS document is an acceptable document of the system to
be implemented.
112. REQUIREMENTS VALIDATION
Typical problems and the corresponding actions:
• Requirements clarification
• Missing information (obtain this information from stakeholders)
• Requirements conflicts (stakeholders must negotiate to resolve
the conflict)
• Unrealistic requirements
• Security issues
114. Software Quality
Our objective in software engineering is to produce good quality, maintainable
software in time and within budget.
Here, quality is very important.
Different people understand different meanings of quality, such as:
•Conformance to requirements
•Fitness for the purpose
•Level of satisfaction
When the user uses the product and finds it fit for its purpose, he/she
feels that the product is of good quality.
115. Software Quality Assurance
Software quality assurance (SQA) consists of a means of
monitoring the software engineering processes and methods
used to ensure quality.
Every software developer will agree that high-quality software is
an important goal. It has been said, "Every program does something
right, it just may not be the thing that we want it to do."
116. Software Quality Assurance
The definition serves to emphasize three important points:
1. Software requirements are the foundation from which quality
is measured. Lack of conformance to requirements is lack of
quality.
2. Specified standards define a set of development criteria that
guide the manner in which software is engineered. If the criteria
are not followed, lack of quality will almost surely result.
3. A set of implicit requirements often goes unmentioned (e.g.,
the desire for ease of use and good maintainability). If software
conforms to its explicit requirements but fails to meet implicit
requirements, software quality is suspect.
117. Verification and Validation
It is the name given to the checking and analysis process.
The purpose is to ensure that the software conforms to its
specifications and meets the needs of the customer.
Verification represents the set of activities that are carried
out to confirm that the software correctly implements the
specific functionality.
Validation represents the set of activities that ensure that the
software that has been built satisfies the customer
requirements.
118. Verification and Validation
• Verification asks: Are we building the product right?
Validation asks: Are we building the right product?
• Verification ensures that the software system meets all the specified
functionality. Validation ensures that the functionality meets the
intended behavior.
• Verification takes place first and includes checking of documentation,
code, etc. Validation occurs after verification and mainly involves
checking of the overall product.
• Verification is done by developers. Validation is done by testers.
• Verification involves static activities such as reviews, walk-throughs and
inspections. Validation involves dynamic activities, such as executing the
software against the requirements.
119. Unit 3
• Software Design
The design phase of software development deals with transforming the
customer requirements as described in the SRS documents into a form
implementable using a programming language. The software design
process can be divided into the following three levels or phases of design:
1. Interface Design
2. Architectural Design
3. Detailed Design
121. Elements of a System
1. Architecture: This is the conceptual model that defines the structure, behavior, and views of a
system. We can use flowcharts to represent and illustrate the architecture.
2. Modules: These are components that handle one specific task in a system. A combination of the
modules makes up the system.
3. Components: This provides a particular function or group of related functions. They are made up of
modules.
4. Interfaces: This is the shared boundary across which the components of a system exchange
information and relate.
5. Data: This is the management of the information and data flow.
122. Interface Design
Interface design is the specification of the interaction between a system and its environment. This
phase proceeds at a high level of abstraction with respect to the inner workings of the system, i.e.,
during interface design, the internals of the system are completely ignored, and the system is
treated as a black box. Attention is focused on the dialogue between the target system and the
users, devices, and other systems with which it interacts. The design problem statement produced
during the problem analysis step should identify the people, other systems, and devices which are
collectively called agents.
Interface design should include the following details:
1. Precise description of events in the environment, or messages from agents to which the system
must respond.
2. Precise description of the events or messages that the system must produce.
3. Specification of the data, and the formats of the data coming into and going out of the system.
4. Specification of the ordering and timing relationships between incoming events or messages, and
outgoing events or outputs.
123. Architectural Design
Architectural design is the specification of the major components of a system, their
responsibilities, properties, interfaces, and the relationships and interactions
between them. In architectural design, the overall structure of the system is
chosen, but the internal details of major components are ignored. Issues in
architectural design include:
1. Gross decomposition of the systems into major components.
2. Allocation of functional responsibilities to components.
3. Component Interfaces.
4. Component scaling and performance properties, resource consumption
properties, reliability properties, and so forth.
5. Communication and interaction between components.
124. Detailed Design
Detailed design is the specification of the internal elements of all major system components, their
properties, relationships, processing, and often their algorithms and the data structures. The
detailed design may include:
1. Decomposition of major system components into program units.
2. Allocation of functional responsibilities to units.
3. User interfaces.
4. Unit states and state changes.
5. Data and control interaction between units.
6. Data packaging and implementation, including issues of scope and visibility of program elements.
7. Algorithms and data structures.
125. Design Principles/Concepts
◻ The main design principles/concepts are as follows:
1) Problem/Structural Partitioning.
2) Abstraction.
3) Top-down and Bottom-up design.
4) Refinement
5) Modularity
6) Data Structure
7) Software procedures
127. 1. Problem Partitioning
◻ When solving a small problem, the entire problem can be
tackled at once.
◻ For solving larger problems, the basic principle is the
time-tested principle of "divide and conquer."
◻ This principle suggests dividing the problem into smaller pieces, so that
each piece can be conquered separately.
◻ For software design, therefore, the goal is to divide the
problem into manageably small pieces that can be solved
separately.
◻ The basic reason behind this strategy is the belief that if the
pieces of a problem can be solved separately, the cost of solving
the entire problem is more than the sum of the costs of
solving all the pieces.
128. Problem Partitioning
◻ Problem partitioning can be divided into two categories:
❑ Horizontal partitioning
❑ Vertical partitioning
◻ Horizontal Partitioning: Horizontal partitioning defines
separate branches of the modular hierarchy for each
major program function.
◻ The simplest approach to horizontal partitioning defines
three partitions:
🞑 Input,
🞑 Data transformation (often called processing),
🞑 Output.
129. Problem Partitioning.
◻ Partitioning their architecture horizontally provides a
number of distinct benefits:
🞑 Software that is easier to test.
🞑 Software that is easier to maintain.
🞑 Software that is easier to extend.
🞑 Propagation of fewer side effects.
130. Problem Partitioning
◻ Vertical Partitioning:
🞑 Vertical partitioning, often called factoring,
suggests that control and work should be
distributed top-down in the program structure.
🞑 Top-level modules should perform control
functions and do little actual processing work.
🞑 Modules that reside low in the structure should be
the workers, performing all input, computation,
and output tasks.
132. Modularization
•Modularization is the process of separating the functionality of a
program into independent, interchangeable modules, such that each
contains everything necessary to execute only one aspect of the
desired functionality.
133. Modularization
◻ Modularity enhances design clarity, which in turn eases
implementation, debugging, testing, documenting, and
maintenance of the software product.
◻ Modules that may be created during program
modularization are:
🞑 Process Support Modules: In these, all the functions and data
items that are required to support a particular business process
are grouped together.
🞑 Data Abstraction Modules: These are abstract types that
are created by associating data with processing components.
🞑 Functional Modules: In these, all the functions that carry out
similar or closely related tasks are grouped together.
134. Classification of Modules
◻ A module can be classified into three types depending
on the activating mechanism:
🞑 An Incremental Module is activated by an interruption and
can be interrupted by another interrupt during
execution, prior to completion.
🞑 A Sequential Module is referenced by
another module and executes without interruption by any
external software.
🞑 Parallel Modules are executed in parallel with other
modules.
135. Example: Modularization of a text-to-speech
application
Consider a text-to-speech application that will translate a user's input text into speech
and read it out loud. It should be able to:
• Parse a user's input text
• Use a selected computer voice to read out the text
• Have a controller that can speed up or slow down the computer's speech if the user
chooses
We can apply modularization to this application and decompose it into the following
modules (see the sketch after this list):
• Text-to-speech: Parses the user's text to be read out loud
• Computer voice: Stores and provides computer voices that the user can choose
• Text speech controller: Controls the speed of the speech that the user chooses
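A minimal Python sketch of this decomposition (the sketch referred to above): each module is an independent, interchangeable class. The voice names are invented, and the actual speech synthesis is stubbed out with print() calls since no real TTS library is assumed.

```python
# Minimal sketch: the three text-to-speech modules as independent classes.
class TextParser:
    """Text-to-speech module: parses the user's input text."""
    def parse(self, text):
        return [w for w in text.split() if w]   # split into words

class ComputerVoice:
    """Computer-voice module: stores/provides the selectable voices."""
    VOICES = {"alto", "baritone", "soprano"}    # illustrative names only
    def __init__(self, name):
        if name not in self.VOICES:
            raise ValueError(f"unknown voice: {name}")
        self.name = name

class SpeechController:
    """Text-speech-controller module: controls the speed of the speech."""
    def __init__(self, voice, speed=1.0):
        self.voice, self.speed = voice, speed
    def speak(self, words):
        for word in words:
            print(f"[{self.voice.name} @ {self.speed}x] {word}")  # stub for real TTS

# The modules are independent and interchangeable; only this code wires them up.
words = TextParser().parse("Hello modular world")
SpeechController(ComputerVoice("alto"), speed=1.5).speak(words)
```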
137. Good vs Poor Modular design
Good Modular(Layered) Design Poor Modular(layered) Design
139. Pseudo-Code
◻ "Pseudo" means imitation or false, and "code" refers to
the instructions written in a programming language.
◻ Pseudo-code notation can be used in both the
preliminary and detailed design phases.
◻ Using pseudo-code, the designer describes system
characteristics using short, concise English language
phrases that are structured by keywords, such as
If-Then-Else, While-Do, and End.
◻ Keywords and indentation describe the flow of control,
while the English phrases describe processing actions.
140. Pseudo-code
◻ Pseudo-code is also known as program design
language or structured English.
◻ A program design language should have the following
characteristics:
🞑 A fixed syntax of keywords that provide for all structured
constructs, data declarations, and modularity
characteristics.
🞑 A free syntax of natural language that describes
processing features.
🞑 A data-declaration facility.
🞑 Subprogram definition and calling techniques.
142. Advantages of Pseudo-Code
◻ The various advantages of pseudo-code are as follows
(a small illustration of the first point appears after this list):
🞑 Converting pseudo-code to a programming language
is much easier compared to converting a flowchart or
decision table.
🞑 Compared to a flowchart, it is easier to modify the
pseudo-code of program logic whenever program
modifications are necessary.
🞑 Writing pseudo-code involves much less time and
effort than the equivalent flowchart.
🞑 Pseudo-code is easier to write than writing a program
in a programming language because pseudo-code as a
method has only a few rules to follow.
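To illustrate the first advantage, here is a small hedged example: a fragment of structured-English pseudo-code (shown as comments) followed by a near line-by-line Python translation. The pass mark of 40 and the sample marks are invented for the example.

```python
# Pseudo-code (structured English, as described on the earlier slide):
#   For each student Do
#       Read marks
#       If marks >= 40 Then
#           Set result to "Pass"
#       Else
#           Set result to "Fail"
#       End If
#       Print marks and result
#   End For

def grade(marks):
    if marks >= 40:          # If-Then-Else maps directly onto Python's if/else
        result = "Pass"
    else:
        result = "Fail"
    return result

for marks in [35, 72, 58]:   # the For/While-Do loop maps onto a Python loop
    print(marks, grade(marks))
```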
143. Disadvantages of Pseudo-Code
◻ The various disadvantages of pseudo-code are as
follows:
🞑 In the case of pseudo-code, a graphic
representation of program logic is not available as with
flowcharts.
🞑 There are no standard rules to follow in using pseudo-code.
🞑 Different programmers use their own style of writing
pseudo-code, and hence communication problems occur due
to lack of standardization.
🞑 For a beginner, it is more difficult to follow the
logic or write the pseudo-code as compared to
flowcharting.
144. Flowcharts
◻ A flowchart is a convenient technique to represent the flow of
control in a program.
◻ A flowchart is a pictorial representation of an algorithm that uses
symbols to show the operations and decisions to be followed by a
computer in solving a problem.
◻ The actual instructions are written within symbols/boxes using clear
statements.
◻ These boxes are connected by solid lines having arrow marks to
indicate the flow of operation in a sequence.
◻ A flowchart should be drawn before writing a program, which in turn
will reduce the number of errors and omissions in the program.
◻ Flowcharts also help during testing and modification of
programs.
147. Example of a Flowchart
◻ As an example, consider an algorithm to find the average of n numbers.
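The flowchart itself is not reproduced here, but a minimal Python version of the algorithm it describes (read the numbers, accumulate the sum and count, divide) could look like this; the sample input list is invented.

```python
# Minimal sketch: average of n numbers, mirroring the flowchart's steps.
def average(numbers):
    total = 0
    count = 0
    for value in numbers:      # loop box: add each number, count it
        total += value
        count += 1
    if count == 0:             # decision box: avoid dividing by zero
        return None
    return total / count       # process box: average = sum / n

print(average([4, 8, 15, 16, 23, 42]))   # -> 18.0
```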
148. Structure Charts
◻ The structure chart is one of the most commonly used
methods for system design.
◻ Structure charts are used during Architectural Design
to document hierarchical structures, parameters,
and interconnections in a system.
◻ It partitions a system into black boxes.
◻ A black box means that functionality is known to the
user without the knowledge of internal design.
◻ Inputs are given to a black box and appropriate
outputs are generated by the black box.
149. Structure Charts
◻ This concept reduces complexity because details are hidden
from those who have no need or desire to know.
◻ Thus, systems are easy to construct and easy to maintain.
Here, black boxes are arranged in hierarchical format.
152. Example-1
◻ Question: A software system called RMS calculating software reads three
integer numbers from the user in the range between -1000 and +1000,
determines the root mean square (rms) of the three input numbers, and then
displays it.
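A minimal Python sketch (not part of the slides) of the RMS calculating software, decomposed the way a structure chart would arrange it: a top-level module that calls lower-level modules for input validation, computation, and output. The sample input is invented.

```python
# Minimal sketch: RMS software structured as main -> input, compute, display.
import math

def read_three_integers(values):
    """Input module: accept three integers in the range -1000..+1000."""
    if len(values) != 3:
        raise ValueError("exactly three numbers are required")
    for v in values:
        if not (-1000 <= v <= 1000):
            raise ValueError(f"{v} is outside the range -1000..+1000")
    return values

def compute_rms(a, b, c):
    """Processing module: root mean square of the three numbers."""
    return math.sqrt((a * a + b * b + c * c) / 3)

def display(result):
    """Output module."""
    print(f"RMS = {result:.3f}")

def main():
    a, b, c = read_three_integers([3, 4, 5])   # sample input for illustration
    display(compute_rms(a, b, c))

main()
```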