Software evaluation via users’ feedback at runtime
Presented by: Nada Sherief
nsherief@bournemouth.ac.uk
12 May 2014 | EASE'14 | London, UK
 Introduction
 Problem and Motivation
 Related Work
 Research Aims and Objectives
 Research Questions
 Results
 Research Method
 Research Plan
 Acknowledgement
 The stakeholders of software evaluation include users, who use the software to meet their needs and expectations, i.e. their requirements.
 Users’ acceptance and efficient use of the software are a main subject of evaluation.
 In a dynamic world, users’ acceptance and view of the software are also dynamic, and need to be captured throughout the lifetime of the software to stay up to date.
 Explicit evaluation feedback from users is a main mechanism for communicating their perception of the role and quality of software at runtime.
 Users’ feedback is meaningful information given by the users of the software, based on their experience in using it.
 Feedback can contribute to identifying problems in the software, modifying existing requirements, or requesting new requirements, leading to better user acceptance of the software.
 Since the subject of evaluation feedback is the role of the system in meeting requirements, we could say that this feedback enables a user-centric, requirements-based evaluation.
Problem and Motivation
 Traditional evaluation methods, which are mainly design-time activities, have a range of limitations, including the following:
◦ Limitations in predicting and simulating the actual context of use.
◦ The unstructured and varied ways in which users provide their feedback, typically in natural language.
◦ Capturing the opinions of only an elite group of users.
 Crowdsourcing is a new form of problem solving, typically conducted online and relying on a large number of people.
 Crowdsourcing is a flexible business model that requires a less strict recruitment and contracting process.
 Crowdsourcing is typically used for non-critical tasks, and for tasks which naturally require input from the general public, where the right answer is based on people’s acceptance and dynamics.
 Our motivating principle is harnessing the power of the crowd for software evaluation.
 This has various potential benefits in comparison to centralized approaches:
◦ The ability to evaluate software while users are using it in practice (i.e. at runtime).
◦ Access to a large crowd, enabling fast and scalable evaluation.
◦ The ability to keep the evaluation knowledge up to date.
◦ Access to a wider and more diverse set of users and contexts of use, unpredictable by designers at design time.
◦ Enabling users to introduce new quality attributes and requirements.
 Despite the speculated benefits, the literature is still limited in providing engineering approaches and foundations for developing crowdsourcing frameworks for software evaluation.
 Most of the available evaluation approaches that use crowdsourcing apply the paradigm as a whole and rely on commercial platforms, such as mTurk (https://www.mturk.com/).
 We advocate that studying and specifying the structure of crowdsourced feedback has a major impact on making the evaluation knowledge more meaningful and helpful to the software and its developers.
Related Work
 There are several established approaches where the role of users is central, such as user-centered design, user experience, agile methodologies, and usability testing.
 These techniques involve users in the software development life cycle, including prototyping and evaluation.
 However, these techniques are expensive and time-consuming when used for highly variable software designed to be used by a large crowd in contexts unpredictable at design time.
 Our work on crowd-sourced evaluation is a kind of end-user computing in its motivation: providing end-users with the ability to change the system according to their views so that it meets their needs.
 In contrast to end-user computing, crowdsourced evaluation relies on users’ feedback as a way to evolve the system, or to adapt it by switching between configurations at runtime according to the analysis of collective feedback, instead of relying on a single user’s feedback.
 This helps cope with large-scale systems where a large number of variations exist.
 We also give a particular focus to studying requirements engineering models which support variability.
 Models can represent the prominent aspects of the software; when used formally, they enable automated reasoning to derive essential information about the software employing them.
 Since we are proposing a crowd-sourced evaluation process, we propose that using these models to represent stakeholders’ goals, software features, and configurations, and relating users’ feedback to them, would make the evaluation task easier for users.
 They will also provide systematic assistance to developers in extracting new requirements and problems.
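As a minimal illustration of the kind of automated reasoning such models could enable, the sketch below (hypothetical names and data, not the proposed framework) attaches crowd feedback to software features and derives which features the collective ratings flag as problematic:

```python
# Illustrative sketch: features with user feedback attached, plus a simple
# automated-reasoning step over collective ratings. All names are made up.

class Feature:
    def __init__(self, name, mandatory=False):
        self.name = name
        self.mandatory = mandatory
        self.feedback = []  # feedback items linked to this feature

    def add_feedback(self, text, rating):
        self.feedback.append({"text": text, "rating": rating})

def problematic_features(features, threshold=2.5):
    """Return features whose average feedback rating falls below a threshold."""
    flagged = []
    for f in features:
        if f.feedback:
            avg = sum(fb["rating"] for fb in f.feedback) / len(f.feedback)
            if avg < threshold:
                flagged.append(f.name)
    return flagged

chat = Feature("chat")
search = Feature("search", mandatory=True)
chat.add_feedback("crashes on long messages", 1)
chat.add_feedback("notifications arrive late", 2)
search.add_feedback("fast and accurate", 5)

print(problematic_features([chat, search]))  # -> ['chat']
```

In a full model, features would additionally carry parent/child and cross-tree constraints; here only the feedback-to-feature link is sketched.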
 Recently, more work has been directed towards systematic methods for representing and obtaining users’ feedback.
◦ Processes for continuous and context-aware user input have been proposed.
◦ An empirical study on users’ involvement for the purpose of software evaluation and evolution has been conducted.
◦ Crowd feedback has also been advocated for shaping software adaptation.
 In general, when designing an empirical study in software engineering, recruiting the right type and number of participants is always a challenge.
 Researchers are often required to make trade-offs to be able to perform the study.
 The crowdsourcing paradigm and the mTurk platform have been used to evaluate the usability of a school’s website.
◦ The claimed advantages are greater participant involvement, low cost, high speed, and varied user backgrounds.
◦ The disadvantages include lower-quality feedback, fewer interactions, more spammers, and less focused user groups.
 A general observation of the current state of the art is that it treats crowdsourcing as a single, undifferentiated concept, without addressing its peculiarities and its different configurations.
 Aspects like the interaction style and the model of the obtained feedback are generally overlooked.
 Our work is a first attempt to address this range of challenges.
Research Aims and Objectives
 To enable users to structure their feedback themselves. This will lead to more effective management of feedback and a richer role for users as evaluators.
 To reuse the crowd-sourced evaluation knowledge to recommend that a user adopt a certain configuration in a certain context, according to the collective evaluation of other or similar users in similar contexts, in a way similar to collaborative filtering.
 We plan for our approach to intelligently enhance the system, for example by learning from user interactions, or learning which problems are solved by which features in order to resolve similar cases.
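The collaborative-filtering-style recommendation aimed at above can be sketched as follows (an illustrative Python sketch, not the project's implementation; the ratings data and all names are made up):

```python
# Illustrative sketch: recommend the configuration that users in a similar
# context rated highest, aggregating crowd feedback. Data is hypothetical.
from collections import defaultdict

# (user, context, configuration, rating) tuples gathered from the crowd.
ratings = [
    ("alice", "mobile",  "compact_ui", 5),
    ("bob",   "mobile",  "compact_ui", 4),
    ("bob",   "mobile",  "full_ui",    2),
    ("carol", "desktop", "full_ui",    5),
]

def recommend(context):
    """Pick the configuration with the highest mean rating in this context."""
    totals, counts = defaultdict(float), defaultdict(int)
    for _, ctx, config, score in ratings:
        if ctx == context:
            totals[config] += score
            counts[config] += 1
    if not totals:
        return None  # no evaluation knowledge for this context yet
    return max(totals, key=lambda c: totals[c] / counts[c])

print(recommend("mobile"))  # -> compact_ui
```

A real framework would also weight ratings by user similarity, as in collaborative filtering proper; the sketch only shows the context-based aggregation step.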
 To develop a novel user-driven feedback modelling language that enables users to define their feedback structures and acquisition methods.
 To develop a crowdsourced engineering framework designed for both users and engineers to model and correlate collective users’ feedback, context, and software features at runtime for the purpose of software evaluation.
 To implement a plug-in tool, which can be used alongside a main application to utilize the developed engineering framework. Since our modelling framework is intended for end-users, an implementation will add the “look and feel” to the whole approach.
Research Questions
 In the following, we discuss the research questions (RQs) of this work.
 These questions contribute to the bigger picture by addressing the important aspects that can be incorporated into the new user-driven feedback modelling language for the purpose of continuously evaluating the software while it is in use (i.e. at runtime).
◦ RQ1) What is the current support in the literature for the acquisition and structuring of users’ feedback in the context of software evaluation?
◦ RQ2) What are the key feedback qualities that can help developers extract requirements knowledge from the collected end-user feedback?
◦ RQ3) What methods and mechanisms could be adopted to reuse others’ feedback or feedback structures? How can they be adapted in our framework?
◦ RQ4) What key aspects could be included in our framework to increase users’ willingness to actively participate in their new role as evaluators, and how can software tools support that?
◦ RQ5) What validations must be developed to assess whether the proposed approach enhances the acquisition and analysis of user feedback? How does this help in planning the software’s adaptation and evolution?
Results
 A multi-session focus group study was conducted; focus groups are a popular qualitative research technique in software engineering.
 The main purpose of this focus group was to elicit requirements from various stakeholders to understand how crowdsourcing should be practiced in terms of feedback gathering.
 It was also used to explore opportunities to use crowdsourcing mechanisms to obtain user feedback during software development.
 Our main aim was to collect insights from both users and developers that can inform our research questions.
 Both junior and senior software developers were invited to join the first session, whose emphasis was to understand:
◦ how software developers normally gather user feedback
◦ how they think good feedback should be structured
◦ how they collaborate and communicate with users during development, as this could inform the way we design feedback requests
 The second session was conducted with regular software users who are accustomed to providing feedback.
 The emphasis of this session was to explore:
◦ how users would like feedback requests to look
◦ what drives them to provide feedback
◦ their concerns about being involved too little, and also about being involved more than they expect
 This session was also used to investigate users’ motivations to take part in projects, and to learn from their experience of that participation.
 Following the adopted thematic analysis approach, five thematic areas were formed: subject, structure, reusability, engagement, and involvement.
 The five thematic areas, with brief descriptions, examples, and their relation to the research questions, are summarized in the following table:
Thematic Area — Theme: Example

 Subject
◦ Method: “Snapshots, Text, or Audio”
◦ Clarity: “reach a clear problem specification”
◦ Specificity: “specific to the software’s features”

 Structure
◦ Timing: “real-time feedback”
◦ Level of Detail: “giving detailed feedbacks”
◦ Measurement: “a group of predefined keywords”, “structured in a specific way”
◦ Specificity: “give feedback to specific problems”

 Reusability
◦ Storage: “There can be a bank of statements”
◦ Rated: “users can view feedbacks and rate how much they agree/disagree with the feedback”
◦ Statistical: “statistics should occur to represent how much a feedback was meaningful, useful/useless”
◦ Variability Awareness: “the system can increase user awareness by giving him a list of friends’ experiences with features”
 Engagement
◦ Recognition: “a friendly confirmation for participation”, “more personalized options for feedback”
◦ Value: “it encourages users to give feedback if they can meet with analysts to discuss problems in some ways”
◦ Channel: “the interactions should be very simple”
◦ Transparency: “it would increase the users’ trust and willingness to give feedback if they know the cycle of how their feedback will be used”

 Involvement
◦ Privacy: “would like to stay anonymous”, “it is important if the user can control who is able to see his feedback”
◦ Response: “the software’s speed of response to my feedback affects my willingness to give feedback”
◦ Support: “there can be videos to explain to the users what they can do (in order to provide feedback)”
◦ Rewards: “the system should do some favor to the user who gave useful feedback that helped enhance the system”, “trying new features or versions for free if the user gave good feedback”, “users who gave useful feedback can be accredited to increase their reputations”
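Purely as an illustration, the themes above could be reflected in a structured feedback record along these lines (a Python sketch with field names of our own choosing, not the proposed modelling language):

```python
# Hypothetical structure for one crowd feedback item, echoing the thematic
# areas: subject (method, specificity), structure (measurement), involvement
# (privacy). All field names are illustrative.
from dataclasses import dataclass

@dataclass
class FeedbackItem:
    user_id: str
    feature: str            # specificity: tied to a concrete software feature
    method: str             # subject/method: "text", "snapshot", or "audio"
    text: str
    rating: int             # structure/measurement: predefined scale, 1..5
    anonymous: bool = True  # involvement/privacy: stay anonymous by default
    timestamp: float = 0.0  # structure/timing: when the feedback was given

item = FeedbackItem("u42", "search", "text", "results load slowly", 2)
print(item.feature, item.rating)  # -> search 2
```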
Research Method
 We will design a questionnaire to gather more relationships between the concepts we have concluded from the focus groups, and to confirm the results with a larger population, further tackling RQ1.
 This will increase the formality and validity of our approach, and help us reach more generalizable conclusions.
 We will design and conduct a set of interviews with engineers to understand and gain insights into what makes users’ feedback useful for evolution and maintenance decisions, for further focus on RQ2.
 Data will be analyzed using categorization and thematic analysis.
 To address RQ5, one of the key techniques we may use during our validation phase is controlled experiments.
 Controlled experiments have several advantages. They will allow us to:
◦ conduct well-defined, focused studies that produce statistically meaningful results;
◦ capture relationships between users’ contexts of use and different usages of the software;
◦ provide good explanations of why results do or do not occur;
◦ capture important variables and the relationships between them, such as between ease of use and/or user incentives and users’ ability to provide good feedback.
 Controlled experiments have drawbacks, as does any empirical method in software engineering.
 Yet the better the experiment design, the better the results and relationships that can be captured.
Research Plan
 We summarize our research plan and milestones for the whole research project. The milestones are as follows:
◦ First, we will complete the empirical investigation using questionnaires and interviews, as mentioned earlier.
◦ Second, after analyzing the data from all methods, we will reach a set of conclusions that will be incorporated into the design of our intended crowdsourced engineering framework for software evaluation.
◦ Third, we will begin designing our prototype tool. It will put the developed framework into practice, and will include both verification of the designed model and an easy-to-use interface that allows for feedback modelling and acquisition.
◦ Finally, we will apply our approach and prototype in practice for both validation and refinement, and report the results.
Figure 1. Flowchart for the research plan
Acknowledgement
 I would like to thank my supervisors, Dr Raian Ali and Prof. Keith Phalp, for their invaluable feedback and support.
 The research was supported by an FP7 Marie Curie CIG
grant (the SOCIAD Project) and by Bournemouth University
– Fusion Investment Fund and the Graduate School PGR
Development Fund.
12 May 2014EASE'14| London, UK 40
12 May 2014EASE'14| London, UK 41
12 May 2014EASE'14| London, UK 42

More Related Content

PPTX
Crowdsourcing Software Evaluation
PPT
Webinar Agile Presentation V.1.0
PDF
2015-11-11 research seminar
PPT
When Users Becom Collaborators: Towards Continuous and Context-Aware User Input
PDF
Content package - Mobile computing
PPT
Usability Evaluation in Educational Technology
PDF
Edc2013 compliance conundrum-alperin
PDF
IRJET- Analysis of Rating Difference and User Interest
Crowdsourcing Software Evaluation
Webinar Agile Presentation V.1.0
2015-11-11 research seminar
When Users Becom Collaborators: Towards Continuous and Context-Aware User Input
Content package - Mobile computing
Usability Evaluation in Educational Technology
Edc2013 compliance conundrum-alperin
IRJET- Analysis of Rating Difference and User Interest

What's hot (7)

PPT
Brunel opensourcing 1
PDF
Improving ERS usability through user-system collaboration
PDF
User Experience Testing
DOCX
ECE695DVisualAnalyticsprojectproposal (2)
PPTX
Accommodating Openness Requirements in Software Platforms: A goal-Oriented Ap...
PPTX
Process, design, implementation and evaluation of a mobile collaboration layer
PPTX
Elv 14-understanding agile software development practices using shared mental
Brunel opensourcing 1
Improving ERS usability through user-system collaboration
User Experience Testing
ECE695DVisualAnalyticsprojectproposal (2)
Accommodating Openness Requirements in Software Platforms: A goal-Oriented Ap...
Process, design, implementation and evaluation of a mobile collaboration layer
Elv 14-understanding agile software development practices using shared mental
Ad

Viewers also liked (16)

PPT
Beyond Usability
PPT
Social media usage in HR departments in Greece
DOCX
Survey questionnaire regarding usage and benefits of e
PDF
Analysis of Facebook Usage by College Students in Thailand
PDF
Market Research Project - A study of the Usage & Attitude of Ice Cream Consum...
PAGES
Media questionnaire
DOCX
Questionnaire
PPTX
Usage And Attitude Survey
PDF
A Blog is not a Content Strategy
PPTX
Los 100 cientificos que han aportado a la ciencia
DOCX
Questionnaire
PPTX
How News Channel Was Benefited By NAS Deployment
DOCX
Questionnaire Social media as educational tool
PDF
A Content Strategy Roadmap
PDF
Content Curation Scorecard for Content Marketing Success
PDF
Content Strategy for Everything
Beyond Usability
Social media usage in HR departments in Greece
Survey questionnaire regarding usage and benefits of e
Analysis of Facebook Usage by College Students in Thailand
Market Research Project - A study of the Usage & Attitude of Ice Cream Consum...
Media questionnaire
Questionnaire
Usage And Attitude Survey
A Blog is not a Content Strategy
Los 100 cientificos que han aportado a la ciencia
Questionnaire
How News Channel Was Benefited By NAS Deployment
Questionnaire Social media as educational tool
A Content Strategy Roadmap
Content Curation Scorecard for Content Marketing Success
Content Strategy for Everything
Ad

Similar to Software evaluation via users’ feedback at runtime (20)

PDF
How can User Experience and Business Analysis work well together?
PDF
Literature_Review_CA2_N00147768
PPTX
Crowdsourcing
PDF
My cv 2000 - 2016
PDF
PDF
A Conceptual Design And Evaluation Framework For Mobile Persuasive Health Tec...
PDF
ManganoHoeeg - Virtual Supermarket Designer POSTER
PDF
A Critical Evaluation of Popular UX Frameworks Relevant to E-Health Apps
PDF
The International Journal of Computational Science, Information Technology an...
PDF
The International Journal of Computational Science, Information Technology an...
PDF
THE USABILITY METRICS FOR USER EXPERIENCE
PPTX
Mobile Healthcare App
PDF
UX Design Process | Sample Proposal
PDF
An Exploratory Study of Usability Practice from User-Centered Design View: M...
PPTX
Usability Primer
PDF
Guide to interactive assessments development services.pdf
PDF
PhD Thesis Defense - Enhancing Software Quality and Quality of Experience thr...
PDF
Usability Testing - A Holistic Guide.pdf
PDF
Agile Usability
PDF
U1 Group Credentials 2015
How can User Experience and Business Analysis work well together?
Literature_Review_CA2_N00147768
Crowdsourcing
My cv 2000 - 2016
A Conceptual Design And Evaluation Framework For Mobile Persuasive Health Tec...
ManganoHoeeg - Virtual Supermarket Designer POSTER
A Critical Evaluation of Popular UX Frameworks Relevant to E-Health Apps
The International Journal of Computational Science, Information Technology an...
The International Journal of Computational Science, Information Technology an...
THE USABILITY METRICS FOR USER EXPERIENCE
Mobile Healthcare App
UX Design Process | Sample Proposal
An Exploratory Study of Usability Practice from User-Centered Design View: M...
Usability Primer
Guide to interactive assessments development services.pdf
PhD Thesis Defense - Enhancing Software Quality and Quality of Experience thr...
Usability Testing - A Holistic Guide.pdf
Agile Usability
U1 Group Credentials 2015

More from Engineering and Social Informatics (ESOTICS) (20)

PPTX
Digital addiction and what you need to know
PDF
Pragmatic requirements for adaptive systems a goal driven modeling and analys...
PDF
Socially augmented software empowering software operation through social cont...
PDF
Mitigating circumstances in cyber crime
PPTX
The design of software based peer groups to combat digital addiction
PPTX
Crowdsourcing transparency requirements through structured feedback and social
PDF
Persuasive and culture aware feedback acquisition
PDF
Adaptive software based feedback acquisition a personas-based design
PDF
A modelling language for transparency requirements in business information sy...
PDF
Modelling and analysing contextual failures for dependability requirements
PDF
Engineering software based motivation a persona-based approach
PDF
REfine a gamifiedplatform for participatory requirements engineering
PPTX
The Emerging Requirement for Digital Addiction Labels
PPTX
Gamification for volunteer cloud computing
PPTX
Crowd centric requirements engineering - ra
PPTX
Towards a code of ethics for gamification at enterprise po em
PPTX
Consideration in software mediated social interaction
PPTX
PPTX
The design of adaptive acquisition of users feedback an empirical study (rcis...
PPT
Socially-Driven Software Adaptation
Digital addiction and what you need to know
Pragmatic requirements for adaptive systems a goal driven modeling and analys...
Socially augmented software empowering software operation through social cont...
Mitigating circumstances in cyber crime
The design of software based peer groups to combat digital addiction
Crowdsourcing transparency requirements through structured feedback and social
Persuasive and culture aware feedback acquisition
Adaptive software based feedback acquisition a personas-based design
A modelling language for transparency requirements in business information sy...
Modelling and analysing contextual failures for dependability requirements
Engineering software based motivation a persona-based approach
REfine a gamifiedplatform for participatory requirements engineering
The Emerging Requirement for Digital Addiction Labels
Gamification for volunteer cloud computing
Crowd centric requirements engineering - ra
Towards a code of ethics for gamification at enterprise po em
Consideration in software mediated social interaction
The design of adaptive acquisition of users feedback an empirical study (rcis...
Socially-Driven Software Adaptation

Recently uploaded (20)

PDF
AutoCAD Professional Crack 2025 With License Key
PPTX
Why Generative AI is the Future of Content, Code & Creativity?
PPTX
Reimagine Home Health with the Power of Agentic AI​
PDF
CCleaner Pro 6.38.11537 Crack Final Latest Version 2025
PDF
iTop VPN Free 5.6.0.5262 Crack latest version 2025
PDF
Design an Analysis of Algorithms II-SECS-1021-03
PDF
CapCut Video Editor 6.8.1 Crack for PC Latest Download (Fully Activated) 2025
PDF
Internet Downloader Manager (IDM) Crack 6.42 Build 41
PPTX
Weekly report ppt - harsh dattuprasad patel.pptx
PDF
Digital Systems & Binary Numbers (comprehensive )
PDF
Wondershare Filmora 15 Crack With Activation Key [2025
PPTX
Computer Software and OS of computer science of grade 11.pptx
PDF
Navsoft: AI-Powered Business Solutions & Custom Software Development
PDF
EN-Survey-Report-SAP-LeanIX-EA-Insights-2025.pdf
PDF
Complete Guide to Website Development in Malaysia for SMEs
PDF
Website Design Services for Small Businesses.pdf
PDF
Download FL Studio Crack Latest version 2025 ?
PDF
Salesforce Agentforce AI Implementation.pdf
PDF
iTop VPN Crack Latest Version Full Key 2025
PDF
17 Powerful Integrations Your Next-Gen MLM Software Needs
AutoCAD Professional Crack 2025 With License Key
Why Generative AI is the Future of Content, Code & Creativity?
Reimagine Home Health with the Power of Agentic AI​
CCleaner Pro 6.38.11537 Crack Final Latest Version 2025
iTop VPN Free 5.6.0.5262 Crack latest version 2025
Design an Analysis of Algorithms II-SECS-1021-03
CapCut Video Editor 6.8.1 Crack for PC Latest Download (Fully Activated) 2025
Internet Downloader Manager (IDM) Crack 6.42 Build 41
Weekly report ppt - harsh dattuprasad patel.pptx
Digital Systems & Binary Numbers (comprehensive )
Wondershare Filmora 15 Crack With Activation Key [2025
Computer Software and OS of computer science of grade 11.pptx
Navsoft: AI-Powered Business Solutions & Custom Software Development
EN-Survey-Report-SAP-LeanIX-EA-Insights-2025.pdf
Complete Guide to Website Development in Malaysia for SMEs
Website Design Services for Small Businesses.pdf
Download FL Studio Crack Latest version 2025 ?
Salesforce Agentforce AI Implementation.pdf
iTop VPN Crack Latest Version Full Key 2025
17 Powerful Integrations Your Next-Gen MLM Software Needs

Software evaluation via users’ feedback at runtime

  • 1. Presented by: Nada Sherief nsherief@bournemouth.ac.uk 12 May 2014EASE'14| London, UK 1
  • 2. 12 May 2014EASE'14| London, UK 2
  • 3.  Introduction  Problem and Motivation  Research Aims and Objectives  Related Work  Research Questions  Results  Research Method  Research Plan  Acknowledgement 12 May 2014EASE'14| London, UK 3
  • 4.  The stakeholders of software evaluation include users who utilize the software to reach their needs and expectations, i.e. requirements.  Users’ acceptance and efficient use of the software is a main subject of evaluation.  In a dynamic world, users’ acceptance and view of the software would be also dynamic and would need to be captured throughout the life time of the software to stay up-to-date.  The explicit evaluation feedback from users is a main mechanism to communicate their perception of the role and quality of software at runtime. 12 May 2014EASE'14| London, UK 4
  • 5.  Users’ feedback is meaningful information given by the users of the software, based on their experience in using it  Feedback can contribute in identifying problems in the software, modifying existing requirements or requesting new additional requirements leading to better users’ acceptance of the software.  The subject of evaluation feedback is the role of the system in meeting requirements, we could say that this feedback enables a user-centric requirements-based evaluation. 12 May 2014EASE'14| London, UK 5
  • 6.  Introduction  Problem and Motivation  Research Aims and Objectives  Related Work  Research Questions  Results  Research Method  Research Plan  Acknowledgement 12 May 2014EASE'14| London, UK 6
  • 7.  Traditional evaluation methods, which is mainly a design- time activity, have a range of limitations including the following: ◦ Limitations in predicting and simulating the actual context of use. ◦ The unstructured and varied ways in which users provide their feedback typically in a natural language. ◦ Capturing the opinion of only an elite group of users. 12 May 2014EASE'14| London, UK 7
  • 8.  Crowdsourcing is a new form of problem solving, which is typically online and relies on a large number of people.  Crowdsourcing is a flexible business model and it requires a less strict recruitment and contracting process.  Crowdsourcing is typically used for non-critical tasks and tasks which naturally require input from the general public and where the right answer is based on people’s acceptance and dynamics.  Our motivating principle is using the power of the crowd for software evaluation. 12 May 2014EASE'14| London, UK 8
  • 9.  This has various potential benefits in comparison to centralized approaches: ◦ An ability to evaluate software while users are using it in practice (i.e. at runtime). ◦ The access to a large crowd enables fast and scalable evaluation. ◦ An ability to maintain the evaluation knowledge up-to-date ◦ An access to a wider and diverse set of users and contexts of use that are unpredictable by designers at design time ◦ Enabling users to introduce new quality attributes and requirements. 12 May 2014EASE'14| London, UK 9
  • 10.  Despite the speculated benefits, the literature is still limited in providing engineering approaches and foundations to develop crowdsourcing frameworks of software evaluation.  Most of the available evaluation approaches that use crowdsourcing imply the use of the paradigm and use commercial platforms, such as mTurk (https://www.mturk.com/ ).  We advocate that the study and specification of the structure of crowdsourced feedback have a major impact on the possibility of getting evaluation knowledge more meaningful and helpful to the software and developers. 12 May 2014EASE'14| London, UK 10
  • 11.  Introduction  Problem and Motivation  Related Work  Research Aims and Objectives  Research Questions  Results  Research Method  Research Plan  Acknowledgement 12 May 2014EASE'14| London, UK 11
  • 12.  There are several established approaches where the role of users is central, such as: User centered design, User Experience, Agile methodology, and Usability Testing.  These techniques involve users in the software development life cycle, including the prototyping and evaluation.  These techniques, are expensive and time consuming when used for highly variable software designed to be used by a large crowd in contexts unpredictable at design time. 12 May 2014EASE'14| London, UK 12
  • 13.  Our work on crowd-sourced evaluation is a kind of end- user computing in the motivation to provide end users with the ability to change the system according to their views to meet their needs.  In contrast to end-user computing, crowdsourced evaluation relies on users’ feedback as a way to evolve the system, or to adapt the system by switching between configurations at runtime according to the analysis of collective feedback, instead of relying on one user feedback.  This helps cope with large scale system where a large number of variations exist. 12 May 2014EASE'14| London, UK 13
• 14.
 We also give particular focus to studying requirements engineering models that support variability.
 Models can represent the prominent aspects of the software and, when used formally, enable automated reasoning to derive essential information from the software employing them.
 Since we are proposing a crowd-sourced evaluation process, we expect that using these models to represent stakeholders' goals, software features, and configurations, and relating users' feedback to them, will be easy for users.
 They will also provide systematic assistance to developers in extracting new requirements and problems.
• 15.
 Recently, more work has been directed towards devising systematic methods for representing and obtaining users' feedback:
◦ Processes for continuous and context-aware user input have been proposed.
◦ An empirical study on users' involvement for the purpose of software evaluation and evolution has been conducted.
◦ Crowd feedback has also been advocated for shaping software adaptation.
• 16.
 In general, when designing an empirical study in software engineering, engaging the right type and number of participants is always a challenge.
 Researchers are often required to make trade-offs to be able to perform the study.
 The crowdsourcing paradigm and the mTurk platform have been used to evaluate the usability of a school's website:
◦ The claimed advantages are greater participant involvement, low cost, high speed, and varied user backgrounds.
◦ The disadvantages include lower-quality feedback, less interaction, more spammers, and less focused user groups.
• 17.
 A general observation of the current state of the art is that it treats crowdsourcing as a monolithic concept, without addressing its peculiarities and its different configurations.
 Aspects like the interaction style and the model of the obtained feedback are generally overlooked.
 Our work is a first attempt to address this range of challenges.
• 18.
 Introduction
 Problem and Motivation
 Related Work
 Research Aims and Objectives
 Research Questions
 Results
 Research Method
 Research Plan
 Acknowledgement
• 19.
 To enable users to structure their feedback themselves, leading to more effective management and a richer role for them as evaluators.
 To reuse the crowd-sourced evaluation knowledge to recommend that a user adopt a certain configuration in a certain context, according to the collective evaluation of other or similar users in similar contexts, in a way similar to collaborative filtering.
 We plan for our approach to intelligently enhance the system, for example by learning from user interactions, or learning which problems are solved by which features in order to resolve similar cases.
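The collaborative-filtering-style recommendation could be sketched roughly as below. Everything here is a hypothetical simplification: contexts are reduced to tag sets, similarity to tag overlap, and the function names are invented for the example.

```python
def recommend(configuration_ratings, target_context, k=2):
    """Recommend a configuration from ratings left by users whose context
    overlaps most with the target user's context.

    configuration_ratings: list of (context_tags, configuration, rating).
    target_context: set of context tags, e.g. {"mobile", "outdoor"}.
    """
    # Rank past ratings by context overlap (a crude similarity measure).
    scored = sorted(
        configuration_ratings,
        key=lambda rec: len(rec[0] & target_context),
        reverse=True,
    )
    neighbours = scored[:k]
    # Among the k most similar contexts, return the best-rated configuration.
    best = max(neighbours, key=lambda rec: rec[2])
    return best[1]

history = [
    ({"mobile", "outdoor"}, "high-contrast", 5),
    ({"mobile", "indoor"}, "default", 3),
    ({"desktop"}, "default", 4),
]
print(recommend(history, {"mobile", "outdoor"}))  # -> high-contrast
```

The point of the sketch is the shape of the reuse: evaluation knowledge gathered from one crowd of users in one context informs the recommendation made to another user in a similar context.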
• 20.
 To develop a novel user-driven feedback modelling language that enables users to define their feedback structures and acquisition methods.
 To develop a crowdsourced engineering framework designed for both users and engineers to model and correlate collective users' feedback, context, and software features at runtime for the purpose of software evaluation.
 To implement a plug-in tool that can be used alongside a main application to put the developed engineering framework into practice. Since our modelling framework is intended to be used by end users, an implementation will add the "look and feel" to the whole story.
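To make "users define their feedback structures and acquisition methods" concrete, a user-defined structure might look like the toy schema below. The field names, acquisition-method labels, and validation rule are all illustrative assumptions, not the actual modelling language under development.

```python
# A user-defined feedback structure: which fields to collect and how.
# All names here are illustrative, not the proposed language's syntax.
feedback_structure = {
    "rating":  {"type": int, "acquisition": "star-widget",    "required": True},
    "comment": {"type": str, "acquisition": "free-text",      "required": False},
    "feature": {"type": str, "acquisition": "feature-picker", "required": True},
}

def validate(feedback, structure):
    """Check one feedback instance against a user-defined structure."""
    for field, spec in structure.items():
        if field not in feedback:
            if spec["required"]:
                return False  # a mandatory field is missing
            continue
        if not isinstance(feedback[field], spec["type"]):
            return False  # wrong type for this field
    return True

print(validate({"rating": 4, "feature": "search"}, feedback_structure))  # -> True
```

The design choice being illustrated is that the structure is data, not code: end users can edit it without programming, while the tool can still reason over it mechanically.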
• 21.
 Introduction
 Problem and Motivation
 Related Work
 Research Aims and Objectives
 Research Questions
 Results
 Research Method
 Research Plan
 Acknowledgement
• 22.
 In the following we discuss the research questions (RQ) of this work.
 These questions contribute to the bigger picture by addressing the important aspects that can be incorporated into the new user-driven feedback modelling language for the purpose of continuously evaluating the software while it is in use (i.e. at runtime).
◦ RQ1) What is the current support in the literature for the acquisition and structuring of users' feedback in the context of software evaluation?
◦ RQ2) What are the key feedback qualities that can help developers extract requirements knowledge from the collected end-user feedback?
• 23.
◦ RQ3) What methods and mechanisms could be adopted to reuse others' feedback or feedback structures? How can they be adapted in our framework?
◦ RQ4) What key aspects could be included in our framework to increase users' willingness to actively participate in their new role as evaluators, and how can software tools support that?
◦ RQ5) What validations must be developed to assess whether the proposed approach enhances the obtainment and analysis of user feedback? How does this help in planning the software's adaptation and evolution?
• 24.
 Introduction
 Problem and Motivation
 Related Work
 Research Aims and Objectives
 Research Questions
 Results
 Research Method
 Research Plan
 Acknowledgement
• 25.
 A multi-session focus group study was conducted; focus groups are a popular qualitative research technique in software engineering.
 The main purpose of this focus group was to elicit requirements from various stakeholders to understand how crowdsourcing should be practiced in terms of feedback gathering.
 It was also used to explore opportunities to use crowdsourcing mechanisms to obtain user feedback during software development.
 Our main aim was to collect insights from both users and developers that can inform our research questions.
• 26.
 Both junior and senior software developers were invited to join the first session, whose emphasis was to understand:
◦ how software developers normally gather user feedback
◦ how they think good feedback should be structured
◦ how they collaborate and communicate with users during development, as this could inform the way we design feedback requests
• 27.
 The second session was conducted with regular software users who are accustomed to providing feedback.
 The emphasis of this session was to explore:
◦ how users would like feedback requests to look
◦ what drives them to provide feedback
◦ their concerns about not being involved enough, and also about being involved more than they expect
◦ This session was also used to investigate their motivations for taking part in projects and to learn from their experience of that participation
• 28.
 Following the approach presented in, five thematic areas were formed: subject, structure, reusability, engagement, and involvement.
 The five thematic areas, with brief descriptions, examples, and their relation to the research questions, are summarized in the following table:
• 29.
Thematic Area: Subject
◦ Method — "Snapshots, Text, or Audio"
◦ Clarity — "reach a clear problem specification"
◦ Specificity — "specific to the software's features"
Thematic Area: Structure
◦ Timing — "real-time feedback"
◦ Level of Detail — "giving detailed feedbacks"
◦ Measurement — "a group of predefined keywords"; "structured in a specific way"
◦ Specificity — "give feedback to specific problems"
Thematic Area: Reusability
◦ Storage — "There can be a bank of statements"
◦ Rated — "users can view feedbacks and rate how much they agree/disagree with the feedback"
◦ Statistical — "statistics should occur to represent how much a feedback was meaningful, useful/useless"
◦ Variability Awareness — "the system can increase user awareness by giving him a list of friends' experiences with features"
• 30.
Thematic Area: Engagement
◦ Recognition — "a friendly confirmation for participation"; "more personalized options for feedback"
◦ Value — "it encourages users to give feedback if they can meet with analysts to discuss problems in some ways"
◦ Channel — "the interactions should be very simple"
◦ Transparency — "it would increase the users' trust and willingness to give feedback if they know the cycle of how their feedback will be used"
Thematic Area: Involvement
◦ Privacy — "would like to stay anonymous"; "it is important if the user can control who is able to see his feedback"
◦ Response — "the software's speed of response to my feedback affects my willingness to give feedback"
◦ Support — "there can be videos to explain to the users what they can do (in order to provide feedback)"
◦ Rewards — "the system should do some favor to the user who gave useful feedback that helped enhance the system"; "trying new features or versions for free if the user gave good feedback"; "users who gave useful feedback can be accredited to increase their reputations"
• 31.
 Introduction
 Problem and Motivation
 Related Work
 Research Aims and Objectives
 Research Questions
 Results
 Research Method
 Research Plan
 Acknowledgement
• 32.
 We will design a questionnaire to gather more relationships between the concepts concluded from the focus groups, and to confirm the results with a larger population, further addressing RQ1.
 This will increase the formality and validity of our approach, and help us reach more generalizable conclusions.
 We will design and conduct a set of interviews with engineers to understand what makes users' feedback useful for evolution and maintenance decisions, with further focus on RQ2.
 Data will be analyzed using categorization and thematic analysis.
• 33.
 To address RQ5, one of the key techniques we may use during our validation phase is controlled experiments.
 Controlled experiments have several advantages. They will allow us to:
◦ conduct well-defined, focused studies that produce statistically meaningful results;
◦ capture relationships between users' contexts of use and different usages of the software;
◦ provide good explanations of why results do and do not occur;
◦ capture important variables and the relationships between them, such as ease of use and/or user incentives and users' ability to provide good feedback.
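As one hedged illustration of "statistically meaningful results": comparing two feedback-acquisition designs in a controlled experiment could use a simple permutation test on the observed scores. The scores, group names, and the significance threshold are invented for the example; the actual experiment design is future work.

```python
import random

def permutation_test(group_a, group_b, trials=10_000, seed=0):
    """Two-sample permutation test on the difference of means.

    Returns the estimated p-value for the observed difference.
    """
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n = len(group_a)
    extreme = 0
    for _ in range(trials):
        rng.shuffle(pooled)  # random relabelling of participants
        diff = abs(sum(pooled[:n]) / n - sum(pooled[n:]) / (len(pooled) - n))
        if diff >= observed:
            extreme += 1
    return extreme / trials

# Hypothetical feedback-quality scores (1..5) under two acquisition designs.
simple_ui = [4, 5, 4, 5, 4, 5, 4, 4]
complex_ui = [2, 3, 2, 3, 2, 2, 3, 3]
p = permutation_test(simple_ui, complex_ui)
print(p < 0.05)  # the difference is unlikely to arise by chance
```

A permutation test is chosen here only because it needs no distributional assumptions and no external libraries; a real study would pick the test to match its design.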
• 34.
 There are drawbacks to controlled experiments, as with any empirical method in software engineering.
 Yet the better the experiment design, the better the results and relationships that can be captured.
• 35.
 Introduction
 Problem and Motivation
 Related Work
 Research Aims and Objectives
 Research Questions
 Results
 Research Method
 Research Plan
 Acknowledgement
• 36.
 We summarize our research plan and milestones for the whole research project. The milestones are as follows:
◦ First, we will complete the empirical investigation using questionnaires and interviews, as mentioned earlier.
◦ Second, after analyzing the data from all methods, we will reach a set of conclusions that will be incorporated into the design of our intended crowdsourced engineering framework for software evaluation.
• 37.
◦ Third, we will begin designing our prototype tool. It will put the developed framework into practice, and will include both the verification of the designed model and an easy-to-use interface to allow for feedback modelling and acquisition.
◦ Finally, we will apply our approach and prototype in practice for both validation and refinement, and report the results.
• 38. Figure 1. Flowchart for the research plan
• 39.
 Introduction
 Problem and Motivation
 Related Work
 Research Aims and Objectives
 Research Questions
 Results
 Research Method
 Research Plan
 Acknowledgement
• 40.
 I would like to thank my supervisors, Dr Raian Ali and Prof. Keith Phalp, for their invaluable feedback and support.
 The research was supported by an FP7 Marie Curie CIG grant (the SOCIAD Project), by the Bournemouth University Fusion Investment Fund, and by the Graduate School PGR Development Fund.