DRONE: Predicting Priority of Reported Bugs
by Multi-Factor Analysis
Yuan Tian1, David Lo1, Chengnian Sun2
1Singapore Management University
2National University of Singapore
Bug tracking systems allow developers to prioritize
which bugs are to be fixed first.
•  Manual process
•  Depends on other bugs
•  Time consuming
What is priority and when is it assigned?
[Diagram: a bug triager faces ~300 new reports to triage daily; each report moves from New to Assigned through validity check, duplicate check, priority assignment, and developer assignment.]
2
Priority vs. Severity
3
"Severity is assigned by customers [users] while
priority is provided by developers . . . customer
[user] reported severity does impact the developer
when they assign a priority level to a bug report,
but it's not the only consideration. For example, it
may be a critical issue for a particular reporter that
a bug is fixed but it still may not be the right thing
for the eclipse team to fix."
Eclipse PMC Member
Example Importance field: P5 (lowest priority level), major (high severity)
q  Background
q  Approach
Overall Framework
Features
Classification Module
q  Experiment
Dataset
Research Questions
Results
q Conclusion
Outline
4
q  Background
q  Approach
Overall Framework
Features
Classification Module
q  Experiment
Dataset
Research Questions
Results
q Conclusion
Outline
5
Bug Report
[Screenshot of a bug report with its fields labeled:
(1) summary, (2) description, (3) product, (4) component,
(5) author, (6) severity, (7) priority, plus time-related info.]
6
q  Background
q  Approach
Overall Framework
Features
Classification Module
q  Experiment
Dataset
Research Questions
Results
q Conclusion
Outline
7
Overall Framework
[Framework diagram. Training phase: training reports pass through the
Feature Extraction Module, which extracts temporal, textual, author,
related-report, severity, and product features; the Classifier Module's
Model Builder then learns a model. Testing phase: the same features are
extracted from testing reports and Model Application outputs a predicted
priority for each.]
8
Temporal Factor
TMP1 Number of bugs reported within 7 days before the reporting of BR.
TMP2 Number of bugs reported with the same severity within 7 days before the reporting of BR.
TMP3 Number of bugs reported with the same or higher severity within 7 days before the reporting of BR.
TMP4-6 The same as TMP1-3 except the time duration is 1 day.
TMP7-9 The same as TMP1-3 except the time duration is 3 days.
TMP10-12 The same as TMP1-3 except the time duration is 30 days.
Textual Factor
TXT1-n Stemmed words from the description field of BR, excluding stop words.
Severity Factor
SEV BR's severity field.
9
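The temporal features reduce to windowed counts over the report history. Below is a minimal sketch, assuming each prior report carries a timestamp and an ordinally encoded severity; the field names are hypothetical, not from the paper.

```python
from datetime import timedelta

# Hedged sketch of the TMP features. `history` is an assumed list of prior
# reports with hypothetical .time and .severity fields (severity encoded
# ordinally so that >= means "same or higher"); `br` is the new report BR.
def temporal_features(br, history, windows_days=(7, 1, 3, 30)):
    feats = []
    for days in windows_days:
        start = br.time - timedelta(days=days)
        recent = [r for r in history if start <= r.time < br.time]
        feats.append(len(recent))                                     # TMP1/4/7/10
        feats.append(sum(r.severity == br.severity for r in recent))  # TMP2/5/8/11
        feats.append(sum(r.severity >= br.severity for r in recent))  # TMP3/6/9/12
    return feats
```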
Author Factor
AUT1 Mean priority of all bug reports made by the author of BR prior to the reporting of BR.
AUT2 Median priority of all bug reports made by the author of BR prior to the reporting of BR.
AUT3 The number of bug reports made by the author of BR prior to the reporting of BR.
Related Reports Factor [REP, Sun et al.]
REP1 Mean priority of the top-20 most similar bug reports to BR as measured using REP prior to the reporting of BR.
REP2 Median priority of the top-20 most similar bug reports to BR as measured using REP prior to the reporting of BR.
REP3-4 The same as REP1-2 except only the top 10 bug reports are considered.
REP5-6 The same as REP1-2 except only the top 5 bug reports are considered.
REP7-8 The same as REP1-2 except only the top 3 bug reports are considered.
REP9-10 The same as REP1-2 except only the top bug report is considered.
10
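These features need only the priorities of BR's most similar earlier reports. A small sketch, with the REP similarity itself abstracted behind an assumed `similar(br, k)` helper (not part of the original):

```python
from statistics import mean, median

# Sketch of REP1-REP10. `similar(br, k)` is an assumed helper returning the
# numeric priorities (1-5) of the k reports most similar to BR under REP,
# restricted to reports filed before BR.
def rep_features(br, similar):
    feats = []
    for k in (20, 10, 5, 3, 1):
        prios = similar(br, k)
        feats.append(mean(prios))    # REP1/3/5/7/9
        feats.append(median(prios))  # REP2/4/6/8/10
    return feats
```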
Product Factor
PRO1 BR's product field (note: a categorical feature).
PRO2 Number of bug reports made for the same product as that of BR prior to the reporting of BR.
PRO3 Number of bug reports made for the same product of the same severity as that of BR prior to the reporting of BR.
PRO4 Number of bug reports made for the same product of the same or higher severity as that of BR prior to the reporting of BR.
PRO5 Proportion of bug reports made for the same product as that of BR prior to the reporting of BR that are assigned priority P1.
PRO6-9 The same as PRO5 except they are for priorities P2-P5 respectively.
PRO10 Mean priority of bug reports made for the same product as that of BR prior to the reporting of BR.
PRO11 Median priority of bug reports made for the same product as that of BR prior to the reporting of BR.
PRO12-22 The same as PRO1-11 except they are for the component field of BR.
11
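Most PRO features are again counts, proportions, and averages over the earlier reports that share BR's product (or component, for PRO12-22). A sketch of a subset, under the same hypothetical field names as before:

```python
# Sketch of some product features. `prior` is the assumed list of reports
# for the same product filed before BR, each with hypothetical .severity
# and .priority (1-5) fields.
def product_features(br, prior):
    n = max(len(prior), 1)  # guard against an empty history
    counts = [
        len(prior),                                          # PRO2
        sum(r.severity == br.severity for r in prior),       # PRO3
        sum(r.severity >= br.severity for r in prior),       # PRO4
    ]
    proportions = [sum(r.priority == p for r in prior) / n   # PRO5-PRO9
                   for p in (1, 2, 3, 4, 5)]
    mean_priority = sum(r.priority for r in prior) / n       # PRO10
    return counts + proportions + [mean_priority]
```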
Overall Framework (repeated; the Classifier Module is detailed next)
12
GRAY: Thresholding and Linear Regression to Classify Imbalanced Data
Training Phase
•  Model building: training features are extracted from the model-building data, and linear regression maps each report's feature values to a real number.
•  Thresholding: the trained model is applied to held-out validation data, and thresholds that map the real numbers to priority levels are tuned on it.
13-15
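A minimal sketch of the regression step, on synthetic stand-in data (not the Eclipse dataset): priorities P1-P5 are treated as the numbers 1 to 5, and an ordinary least-squares fit maps each feature vector to a real-valued score, which the thresholding process below then discretizes.

```python
import numpy as np

# GRAY step 1 (sketch): fit linear regression on the training features,
# treating priority labels P1-P5 as the numbers 1-5.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))        # stand-in feature matrix
y = rng.integers(1, 6, size=100)     # stand-in priority labels 1-5

Xb = np.c_[X, np.ones(len(X))]       # add an intercept column
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
scores = Xb @ w                      # real-valued scores, one per report
```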
Thresholding Process
16
Regression scores: BR1 1.2, BR2 1.4, BR3 3.1, BR4 3.5, BR5 2.1,
BR6 3.2, BR7 3.4, BR8 3.7, BR9 1.3, BR10 4.5
After sorting, thresholds at 1.2, 1.4, 3.4, and 3.7 split the reports
into priority levels:
  P1: BR1 (1.2)
  P2: BR9 (1.3), BR2 (1.4)
  P3: BR5 (2.1), BR3 (3.1), BR6 (3.2), BR7 (3.4)
  P4: BR4 (3.5), BR8 (3.7)
  P5: BR10 (4.5)
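Once the thresholds are fixed, applying them is a lookup. A sketch reproducing the mapping above:

```python
import bisect

# Sketch: map a regression score to a priority level using the example
# thresholds. A score <= 1.2 becomes P1, <= 1.4 becomes P2, <= 3.4 becomes
# P3, <= 3.7 becomes P4, and anything larger becomes P5.
def to_priority(score, thresholds=(1.2, 1.4, 3.4, 3.7)):
    return "P%d" % (bisect.bisect_left(thresholds, score) + 1)

assert to_priority(1.3) == "P2"   # BR9 in the example
assert to_priority(4.5) == "P5"   # BR10 in the example
```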
GRAY: Thresholding and Linear Regression to Classify Imbalanced Data
Testing Phase
•  Testing features are fed to the trained linear regression model, and the tuned thresholds convert each report's score into its predicted priority.
17
q  Background
q  Approach
Overall Framework
Features
Classification Module
q  Experiment
Dataset
Research Questions
Results
q Conclusion
Outline
18
q Eclipse Project
§  2001-10-10 to 2007-12-14,
§  178,609 bug reports.
Dataset
DRONE TestingDRONE Training
Model Building Validation
REP-
4.50% 6.89%
85.45%
1.95% 1.21%
P1 P2 P3 P4 P5
19
Research Questions & Measurements
•  RQ1: Accuracy (precision, recall, F-measure), compared with SEVERISprio [Menzies & Marcus] and SEVERISprio+
•  RQ2: Efficiency (run time)
•  RQ3: Top features (Fisher score)
20
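For RQ3, features are ranked by Fisher score: the between-class scatter of a feature divided by its within-class scatter, so a high score marks a feature whose values separate the priority levels well. A sketch (X and y are assumed NumPy arrays):

```python
import numpy as np

# Sketch of Fisher-score feature ranking. X is an (n_samples, n_features)
# matrix and y holds the priority level of each sample; higher scores
# indicate more discriminative features.
def fisher_scores(X, y):
    overall = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
        within += len(Xc) * Xc.var(axis=0)
    return between / (within + 1e-12)  # epsilon avoids division by zero
```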
RQ1: How accurate?
[Bar chart: F-measure (0%-100%) per priority level P1-P5 for DRONE, SEVERISprio, and SEVERISprio+]
1.  The baselines predict everything as P3!
2.  Average F-measure improves from 18.75% to 29.47%.
3.  That is a relative improvement of 57.17%.
21
RQ2: How efficient?
Run time (in seconds):
Approach       Feature Extraction (train)   Model Building   Feature Extraction (test)   Model Application
SEVERISprio    <0.01                        812.18           <0.01                       <0.01
SEVERISprio+   <0.01                        773.62           <0.01                       <0.01
DRONE          0.01                         69.25            <0.01                       <0.01
Our approach is much faster in model building!
22
RQ3: What are the top features?
Top-10 features by Fisher score (full definitions in the appendix):
PRO5, PRO16, REP1, REP3, PRO18, PRO10, PRO21, PRO7, REP5, Text "1663"
•  6 out of the top-10 features belong to the product factor family.
•  3 out of the top-10 features come from the related-report factor family.
•  The token "1663" comes from the stack-trace line org.eclipse.ui.internal.Workbench.run(Workbench.java:1663), which appears in 15% of P5 reports.
23-26
Conclusion
•  Priority prediction is an ordinal, imbalanced classification problem; linear regression + thresholding is one option.
•  DRONE improves the average F-measure of the baselines from 18.75% to 29.47%, a relative improvement of 57.17%.
•  Product factor features are the most discriminative, followed by related-report factor features.
yuan.tian.2012@smu.edu.sg
30
I acknowledge the support of Google and the
ICSM organizers in the form of a Female
Student Travel Grant, which enabled me to
attend this conference.
Thank you!
APPENDIX
33
Appendix: Thresholding Walkthrough
Proportion of each priority level in the validation data:
P1: 10%, P2: 20%, P3: 40%, P4: 20%, P5: 10%
After applying the linear regression model to the validation data, the scores are sorted:
BR1 1.2, BR9 1.3, BR2 1.4, BR5 2.1, BR3 3.1, BR6 3.2, BR7 3.4, BR4 3.5, BR8 3.7, BR10 4.5
Initial thresholds (1.2, 1.4, 3.4, 3.7) are placed so that the predicted levels match these proportions:
  P1: BR1 (1.2)
  P2: BR9 (1.3), BR2 (1.4)
  P3: BR5 (2.1), BR3 (3.1), BR6 (3.2), BR7 (3.4)
  P4: BR4 (3.5), BR8 (3.7)
  P5: BR10 (4.5)
34
Tuning one threshold: candidate values around threshold 1 (e.g., 1.1 and 1.3 in addition to 1.2) are tried, and the F-measures of the resulting assignments are computed for each candidate.
35
The candidate with the higher F-measure wins: threshold 1 is updated from 1.2 to 1.3, moving BR9 from P2 to P1.
36
Threshold 1 is then fixed at 1.3, and the process moves on to tune the next threshold.
37
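The walkthrough above can be condensed into a short greedy loop. The sketch below uses the scores from the example; the step size, candidate count, and true labels are illustrative assumptions, not the paper's exact settings.

```python
import bisect
from statistics import mean

def predict(score, thresholds):
    return bisect.bisect_left(thresholds, score) + 1  # priority level 1-5

def avg_f(scores, labels, thresholds):
    # Macro-averaged F-measure over the five priority levels.
    fs = []
    for p in range(1, 6):
        pred = [predict(s, thresholds) == p for s in scores]
        true = [lab == p for lab in labels]
        tp = sum(a and b for a, b in zip(pred, true))
        prec = tp / max(sum(pred), 1)
        rec = tp / max(sum(true), 1)
        fs.append(0.0 if prec + rec == 0 else 2 * prec * rec / (prec + rec))
    return mean(fs)

def tune(scores, labels, thresholds, step=0.1, tries=3):
    # Greedy tuning: perturb one threshold at a time, keep the candidate
    # with the best average F-measure (staying between the neighbouring
    # thresholds), then fix it and move on to the next threshold.
    th = list(thresholds)
    for i in range(len(th)):
        lo = th[i - 1] if i > 0 else float("-inf")
        hi = th[i + 1] if i + 1 < len(th) else float("inf")
        candidates = [t for t in (th[i] + d * step
                                  for d in range(-tries, tries + 1))
                      if lo < t < hi]
        th[i] = max(candidates,
                    key=lambda t: avg_f(scores, labels,
                                        th[:i] + [t] + th[i + 1:]))
    return th

scores = [1.2, 1.4, 3.1, 3.5, 2.1, 3.2, 3.4, 3.7, 1.3, 4.5]  # BR1-BR10
labels = [1, 2, 3, 4, 3, 3, 3, 4, 2, 5]  # assumed true priorities (illustrative)
print(tune(scores, labels, [1.2, 1.4, 3.4, 3.7]))
```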
q  Menzies and Marcus (ICSM 2008)
¡  Analyze reports in NASA
¡  Textual features +feature selection+ RIPPER
q  Lamkanfi et al. (MSR 2010, CSMR 2011)
¡  Predict coarse-grained severity labels
¡  Severe vs. non-severe
¡  Analyze reports in open-source systems
¡  Compare and contrast various algorithms
q  Tian et al.(WCRE 2012)
¡  Information retrieval + k nearest neighbour
Previous Research Work: Severity Prediciton
38
q Tokenization
Spliting document into tokens according to delimiters.
q Stop-word Removal
eg: are, is, I, he
q Stemming
eg: woking, works, worked->work
Text Pre-processing
39
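A sketch of these three steps using NLTK as assumed tooling (the slides do not name a library). Numeric tokens are deliberately kept, since tokens like "1663" turn out to be informative.

```python
# Sketch of the pre-processing pipeline with NLTK (an assumed library
# choice). One-time setup: nltk.download("punkt"); nltk.download("stopwords").
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

STOP = set(stopwords.words("english"))
STEM = PorterStemmer().stem

def preprocess(text):
    tokens = word_tokenize(text.lower())           # tokenization
    tokens = [t for t in tokens if t not in STOP]  # stop-word removal
    return [STEM(t) for t in tokens]               # stemming

print(preprocess("Working copies work when the editor is working"))
# e.g. ['work', 'copi', 'work', 'editor', 'work']
```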
Appendix: Similarity Between Bug Reports (REP)
•  Textual Features
   - Compute BM25Fext scores
   - Feature 1: extracted unigrams
   - Feature 2: extracted bigrams
•  Non-Textual Features
   - Feature 3: product field
   - Feature 4: component field
Note: weights are learned from duplicate bug reports.
40
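The overall REP score is a weighted combination of these four features. A toy sketch: the weights are placeholders (in the actual approach they are learned from duplicate bug reports), and a simple term-overlap function stands in for BM25Fext, which is not reproduced here.

```python
from dataclasses import dataclass, field

@dataclass
class Report:
    unigrams: set = field(default_factory=set)
    bigrams: set = field(default_factory=set)
    product: str = ""
    component: str = ""

def overlap(a, b):
    # Toy stand-in for the BM25F_ext textual score (Jaccard term overlap).
    return len(a & b) / max(len(a | b), 1)

def rep_similarity(q, d, w=(1.0, 0.5, 1.0, 1.0)):
    # Placeholder weights; REP learns them from known duplicate reports.
    return (w[0] * overlap(q.unigrams, d.unigrams)   # feature 1: unigrams
          + w[1] * overlap(q.bigrams, d.bigrams)     # feature 2: bigrams
          + w[2] * (q.product == d.product)          # feature 3: product
          + w[3] * (q.component == d.component))     # feature 4: component
```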
Feature definitions (top-10 from RQ3):
PRO5 Proportion of bug reports made for the same product as that of BR prior to the reporting of BR that are assigned priority P1.
PRO16 Proportion of bug reports made for the same component as that of BR prior to the reporting of BR that are assigned priority P1.
REP1 Mean priority of the top-20 most similar bug reports to BR as measured using REP prior to the reporting of BR.
REP3 Mean priority of the top-10 most similar bug reports to BR as measured using REP prior to the reporting of BR.
PRO18 Proportion of bug reports made for the same component as that of BR prior to the reporting of BR that are assigned priority P3.
PRO10 Mean priority of bug reports made for the same product as that of BR prior to the reporting of BR.
PRO21 Mean priority of bug reports made for the same component as that of BR prior to the reporting of BR.
PRO7 Proportion of bug reports made for the same product as that of BR prior to the reporting of BR that are assigned priority P3.
REP5 Mean priority of the top-5 most similar bug reports to BR as measured using REP prior to the reporting of BR.
Text "1663" Stemmed token "1663" from the description field (from the Workbench.java:1663 stack-trace line).
41