The “Bellwether” Effect
And Its Implications to Transfer Learning
Rahul Krishna (rkrish11@ncsu.edu)
Tim Menzies, and Wei Fu
1
2
WeTOSM ‘14
[Turhan09] Data from
Turkish toasters can
predict defects in
NASA flight systems
Today’s topic:
Transfer Learning
3
Today’s topic:
Simpler Transfer Learning with
“Bell…. what?”
Definitions
Bellwether effect
4
• If a community builds
many software projects
• There exists one ∈ many
from which
• quality predictors can
be built …
• … and used for all
Bellwether method
• find the one
• use it
Definitions
5
• find the one
• use it
Note: vastly simpler than other transfer learning
methods [Turhan09, Turhan11, Nam13, etc.]
Bellwether effect Bellwether method
• If a community builds
many software projects
• There exists one ∈ many
from which
• quality predictors can
be built …
• … and used for all
Outline
● Motivation
● Background
○ Evaluating Quality
○ Transfer Learning
○ The “Bellwether”
● Experimental Setup
○ Benchmark Data
○ Prediction Model
○ Statistical Measures
● Results
● Conclusions 6
The “Cold-Start” Problem
Past Projects Prediction Model Upcoming
releases
7
The “Cold-Start” Problem
Past Projects Prediction Model
?
8
Upcoming
releases
Challenges:
Variable Datasets
... “New projects are always emerging,
and old ones are being rewritten…”
… “the quality, representativeness,
and volume of the training data have a
major influence on the usefulness
and stability of model performance…”
— Rahman et al. [Rah12]
Growing Volume
Of Projects
9
• Unstable conclusions are typical in SE [Menzies12]
• Usefulness of some lesson “X” is contradictory
Challenges:
Conclusion Instability
10
• Unstable conclusions are typical in SE [Menzies12]
• Usefulness of some lesson “X” is contradictory
Challenges:
Conclusion Instability
11
Kitchenham et al. ‘07
• Are data from other
organizations …
• … as useful as local
data?
• Inconclusive
• 3 cases: Just as good.
4 cases: Worse.
• Unstable conclusions are typical in SE [Menzies12]
• Usefulness of some lesson “X” is contradictory
Challenges:
Conclusion Instability
12
Zimmermann et al. ‘09
• 622 pairs of projects
• Only 4% of pairs
were useful
Kitchenham et al. ‘07
• Are data from other
organizations …
• … as useful as local
data?
• Inconclusive
• 3 cases: Just as good.
4 cases: Worse.
• Menzies et al. [Men12] offer several remedies
• They ask for better experimental practice
• Is there a better way?
• Yes! Look for the “Bellwether”
• As long as the bellwether continues to offer good
quality predictions
• Then conclusions from one…
• ... are conclusions for all
13
How to Reduce this Instability?
Outline
● Motivation
● Background
○ Evaluating Quality
○ Transfer Learning
○ The “Bellwether”
● Experimental Setup
○ Benchmark Data
○ Prediction Model
○ Statistical Measures
● Results
● Conclusions 14
Estimating Quality
Why not Static Analyzers?
• Rahman et al. [Rahman14] compared
• code analysis tools:
FindBugs, JLint, and PMD
• with static code defect
predictors
• and found no difference
(measured by AUCEC)
15
• Also:
• using lightweight parsers,
defect predictors can
quickly jump to new
languages
• the same is not true for static
code analysis tools
• Fewer bugs, better software
Estimating Quality
Why not Static Analyzers?
16
• And they work surprisingly well!
• [Ostrand04]: ~80% of the bugs localized
in 20% of the code
Estimating Quality:
Static code Defect Prediction
1. Ubiquitous
• Researchers and industrial practitioners frequently use
them, e.g., companies like Google [Lew14] and V&V
textbooks [Rakitin01]
2. A lot of (ongoing) research
• Tremendous Attention [Nam13]
• Better approaches are constantly being proposed
3. They are easy to use
• Software Metrics can be collected fast
• Wide variety of tools, open source data miners
[sklearn][weka]
17
Outline
● Motivation
● Background
○ Evaluating Quality
○ Transfer Learning
○ The “Bellwether”
● Experimental Setup
○ Benchmark Data
○ Prediction Model
○ Statistical Measures
● Results
● Conclusions 18
Transfer Learning:
Introduction
• Extract knowledge from a source (S) and apply it to a
target (T)
• Data needs to be massaged before use [Zhang15]
• Careful sub-sampling
• Transformation
• Based on the data source, TL is categorized as:
• Homogeneous vs. Heterogeneous
• Based on the transformation [Nam13, Nam15, Jing15]:
• Similarity vs. Dimensionality
19
Transfer Learning:
Categories
Homogeneous
• Source (S) and Target
(T) are quantified using
the same attributes
Heterogeneous
• Source (S) and Target
(T) are quantified using
different attributes
Similarity
• Learn from subsampled
rows/columns of the
source (S)
Dimensionality
• Manipulate
rows/columns of
source (S) to match
target (T)
20
Heterogeneous
• Source (S) and Target
(T) are quantified using
different attributes
Dimensionality
• Manipulate
rows/columns of
source (S) to match
target (T)
Transfer Learning:
Categories
Homogeneous
• Source (S) and Target
(T) are quantified using
the same attributes
Similarity
• Learn from subsampled
rows/columns of the
source (S)
This Talk
21
Homogeneous TL:
Burak Filter
22
• Turhan et al. [Tur09] used relevancy filtering (the “Burak filter”)
• Filter using kNN
• Gather two sets of data
• Validation set (S): the test data
• Candidate set (T): the training data
• Use kNN to pick instances from T
that are “similar” to those in S
• i.e., filter T using S (see the sketch below)
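As a rough illustration, this relevancy filter can be sketched with scikit-learn's NearestNeighbors. The function name, data layout, and k=10 are illustrative assumptions, not the exact setup of [Tur09]:

```python
# A minimal sketch of kNN relevancy filtering in the style of the Burak
# filter; names and k are assumptions for illustration.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def burak_filter(candidates, validation, k=10):
    """Keep, for each validation row, its k nearest candidate rows."""
    knn = NearestNeighbors(n_neighbors=k).fit(candidates)
    _, idx = knn.kneighbors(validation)         # indices: (n_validation, k)
    return candidates[np.unique(idx.ravel())]   # de-duplicated selection
```

Training then proceeds on the filtered candidate set rather than on all of the cross-project data.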
Homogeneous TL:
Burak Filter
• The first study on relevancy filtering
• Their conclusion:
23
… The performances of defect predictors based on the
NN-filtered data do not give necessary empirical
evidence to make a strong conclusion …
… Sometimes NN data based models may perform
better than WC data based models …
Homogeneous TL:
Mixed Model Learner
• Turhan et al. [Tur11] proposed a mixed-model learner
• Combine local data with curated non-local data
• Gather two sets of data
• Validation set (S): Pick a random 10% of local data
• Candidate set (T): Remaining 90% and non-local data
• For non-local data, they use Burak filter[Tur09]
• Experiment with various 90%-10% splits
• 400 experiments were conducted to pick the best model
24
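A minimal sketch of this mixed-data construction, reusing burak_filter() from the sketch above; the 10%/90% split follows the slide, while the function name and data layout are assumptions (the paper's curation details differ):

```python
# A minimal sketch of one mixed-model split [Tur11]; this only mirrors
# the outline above, not the paper's full experimental protocol.
import numpy as np

def mixed_training_set(local, non_local, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(local))
    cut = len(local) // 10
    validation = local[idx[:cut]]                    # random 10% of local data
    train_local = local[idx[cut:]]                   # remaining 90%
    relevant = burak_filter(non_local, validation)   # Burak-filtered non-local
    return np.vstack([train_local, relevant]), validation
```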
Homogeneous TL:
Mixed Model Learner
• Extension to Burak Filter
• Incorporated local data
Challenges
• Similar issues to the Burak Filter
• Biased; unstable models
• The authors report:
… mixed project models offer only limited improvements,
i.e., 3 out of 10 projects
— Turhan ‘11
25
Homogeneous TL:
Addressing the challenges
• Researchers have offered a bleak view of TL
• Zimmermann et al. [Zimm09]
• Transfer is not always consistent
• e.g., IE could learn from Firefox, but not vice versa
• Rahman et al. [Rahman12]
• The “imprecision” of learning across projects
• Recent research has resorted to more complex
approaches
26
More Transfer Learners …
27 WeTOSM ‘14
Outline
● Motivation
● Background
○ Evaluating Quality
○ Transfer Learning
○ The “Bellwether”
● Experimental Setup
○ Benchmark Data
○ Prediction Model
○ Statistical Measures
● Results
● Conclusions 28
Is this complexity necessary?
• Short answer — No
• Just look for the “Bellwether”
• Use our bellwether method
• Build your model
• Et voilà!
29
The Bellwether Method
Generate
Apply Monitor
The Bellwether Method
Generate
• Form project pairs (Pi, Pj)
• Perform a leave-one-out test:
train on Pi, test on Pj
• Pick the project with the best
model (see the sketch below)
Apply Monitor
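A minimal sketch of this Generate step; evaluate() and the projects dictionary are assumed placeholders, and aggregating pairwise scores by their median is one reasonable choice, not necessarily the paper's:

```python
# Round-robin over all ordered project pairs, then pick the source project
# whose models score best (lowest Balance) on all the others.
from itertools import permutations
from statistics import median

def find_bellwether(projects, evaluate):
    """projects: {name: data}; evaluate(src, tgt) -> Balance (lower = better)."""
    scores = {name: [] for name in projects}
    for src, tgt in permutations(projects, 2):    # every pair Pi != Pj
        scores[src].append(evaluate(projects[src], projects[tgt]))
    return min(scores, key=lambda p: median(scores[p]))
```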
The Bellwether Method
Generate
Apply
• Predict Quality
on future
projects
Monitor
The Bellwether Method
Generate
Apply
Monitor
• When predictions fail, restart
Outline
● Motivation
● Background
○ Evaluating Quality
○ Transfer Learning
○ The “Bellwether”
● Experimental Setup
○ Benchmark Data
○ Prediction Model
○ Statistical Measures
● Results
● Conclusions 35
Experiment Setup:
Benchmark Data
• 120 Datasets from 4 communities
• Defects in 3 levels of granularity
• File, Class, and Function
• Open source and Proprietary
36
Experiment Setup:
Benchmark Data
• BTW, Apache has local data
• Multiple versions
• Temporally ordered
37
A total of
54 datasets
Outline
● Motivation
● Background
○ Evaluating Quality
○ Transfer Learning
○ The “Bellwether”
● Experimental Setup
○ Benchmark Data
○ Prediction Model
○ Statistical Measures
● Results
● Conclusions 38
Experiment Setup:
Prediction Model
• We use Random Forests [Zimmerman08]
• Build several decision trees from random subsamples
• Use ensemble learning
• Samples are imbalanced [Pelayo07]
• More “clean” examples than “buggy” ones
• Use SMOTE [Chawla01] to rebalance the data*
• Randomly down-sample “clean” instances
• Up-sample “buggy” instances
*Apply only to training data
38
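A minimal sketch of this pipeline using scikit-learn and imbalanced-learn; the function name and default parameters are illustrative assumptions, not the paper's settings:

```python
# Rebalance the training data with SMOTE, then fit a Random Forest.
# The test data is never resampled (see the footnote above).
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier

def train_defect_model(X_train, y_train, seed=0):
    X_bal, y_bal = SMOTE(random_state=seed).fit_resample(X_train, y_train)
    return RandomForestClassifier(random_state=seed).fit(X_bal, y_bal)
```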
Outline
● Motivation
● Background
○ Evaluating Quality
○ Transfer Learning
○ The “Bellwether”
● Experimental Setup
○ Benchmark Data
○ Prediction Model
○ Statistical Measures
● Results
● Conclusions 40
Experiment Setup:
Statistical Measures
41
• Prediction quality is usually measured using ROC
• ROC is a plot of Recall vs. False Alarm
• The plot requires several treatments,
usually obtained by cross-validation
• We refrain from cross-validation:
• it tends to mix the test data with the bellwether
• Instead, we use Balance [Ma07]
Experiment Setup:
Statistical Measures
42
• Instead of a set of points for ROC,
produce one point:
• (X, Y) = (Pd (Recall), Pf (False Alarm))
• Balance is the weighted distance from the
ideal point, (Pd, Pf) = (1, 0)
• Balance = √((1 − Pd)² + (0 − Pf)²) / √2
• The lower the Balance, the better the performance
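In code, the plain (unweighted) form of that distance looks like the sketch below; the paper may weight Pd and Pf differently, and the √2 denominator just normalizes scores into [0, 1]:

```python
# Balance: distance from (Pd, Pf) to the ideal point (1, 0); lower is better.
from math import sqrt

def balance(pd, pf):
    return sqrt((1 - pd) ** 2 + (0 - pf) ** 2) / sqrt(2)

# balance(1.0, 0.0) == 0.0   (ideal predictor)
# balance(0.7, 0.25) ≈ 0.28  (decent recall, modest false alarms)
```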
Experiment Setup:
Statistical Measures
• The prediction model is inherently random
• Rerun the model 40 times with different seeds
• Collect the Balance measure in every run
• Use the Scott-Knott test to compare Balance values
• Scott-Knott ranks Balance values (best to worst)
• Each rank split combines an effect-size test with a hypothesis test
• Why Scott-Knott?
• It has been used by recent high-profile papers in TSE
[Mittas13] and at ICSE [Ghotra15]
43
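To make the ranking concrete, here is a heavily simplified Scott-Knott sketch. It shows only the split search that maximizes the between-group sum of squares; a faithful implementation, as used in [Mittas13, Ghotra15], would stop splitting when the hypothesis test or effect-size test fails:

```python
# Simplified Scott-Knott: recursively split treatments (pre-sorted by
# median Balance) at the point with the largest between-group sum of
# squares. Stopping tests are omitted, so this recurses to singletons.
from statistics import mean

def scott_knott(groups):
    """groups: [(name, [balance scores])], sorted by median score."""
    if len(groups) < 2:
        return [groups]
    flat = [v for _, vs in groups for v in vs]
    mu, best_gain, cut = mean(flat), -1.0, 1
    for i in range(1, len(groups)):               # try every split point
        left = [v for _, vs in groups[:i] for v in vs]
        right = [v for _, vs in groups[i:] for v in vs]
        gain = (len(left) * (mean(left) - mu) ** 2
                + len(right) * (mean(right) - mu) ** 2)
        if gain > best_gain:
            best_gain, cut = gain, i
    return scott_knott(groups[:cut]) + scott_knott(groups[cut:])
```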
Outline
● Motivation
● Background
○ Evaluating Quality
○ Transfer Learning
○ The “Bellwether”
● Experimental Setup
○ Benchmark Data
○ Prediction Model
○ Statistical Measures
● Results
● Conclusions 44
How rare are “Bellwethers”?
How does the bellwether fare against local models?
Is Bellwether better than other transfer learning methods?
Can we predict which data set will be bellwether?
How much of the “Bellwether” data is required?
Results:
Research Questions
45
How rare are “Bellwethers”?
How does the bellwether fare against local models?
Is Bellwether better than other transfer learning methods?
Can we predict which data set will be bellwether?
How much of the “Bellwether” data is required?
Results:
Research Question 1
46
Results:
Research Question 1
47
How rare are “Bellwethers”?
Research Answer
Our results suggest bellwethers are not rare.
How rare are “Bellwethers”?
Community: Apache
Bellwether: Lucene
Results:
Research Question 1
48
How rare are “Bellwethers”?
Community: NASA
Bellwether: MC
Results:
Research Question 1
49
How rare are “Bellwethers”?
Community: AEEEM
Bellwether: LC
Results:
Research Question 1
50
How rare are “Bellwethers”?
Community: ReLink
Bellwether: Safe
Results:
Research Question 1
51
How rare are “Bellwethers”?
How does the bellwether fare against local models?
Is Bellwether better than other transfer learning methods?
Can we predict which data set will be bellwether?
How much of the “Bellwether” data is required?
Results:
Research Question 2
52
How does the bellwether fare against local models?
Research Answer
For projects measured with the
same quality metrics, training with
the bellwether is just as good as,
if not better than, local models
Results:
Research Question 2
53
How rare are “Bellwethers”?
How does the bellwether fare against local models?
Is Bellwether better than other transfer learning methods?
Can we predict which data set will be bellwether?
How much of the “Bellwether” data is required?
Results:
Research Question 3
54
Is Bellwether better than other transfer learning methods?
Research Answer
The bellwether outperforms standard homogeneous transfer learners.
Results:
Research Question 3
55
How rare are “Bellwethers”?
How does the bellwether fare against local models?
Is Bellwether better than other transfer learning methods?
Can we predict which data set will be bellwether?
How much of the “Bellwether” data is required?
Results:
Research Question 4
56
Can we predict which data set will be bellwether?
Research Answer
This is non-trivial: our attempts to statistically determine whether a project
will be a bellwether were unsuccessful. This remains open to further examination.
Results:
Research Question 4
57
How rare are “Bellwethers”?
How does the bellwether fare against local models?
Is Bellwether better than other transfer learning methods?
Can we predict which data set will be bellwether?
How much of the “Bellwether” data is required?
Results:
Research Question 5
58
How much data is required before detecting the “Bellwether”?
Research Answer
A few dozen defective samples from the bellwether are sufficient to build a
reliable model
Results:
Research Question 5
59
Outline
● Motivation
● Background
○ Evaluating Quality
○ Transfer Learning
○ The “Bellwether”
● Experimental Setup
○ Benchmark Data
○ Prediction Model
○ Statistical Measures
● Results
● Conclusions 60
Practical Implications
• The problem of generality in SE
• Reproducibility is hard to achieve
• With bellwethers, transfer learners can be
• not only reproducible
• but also stable
• and reliable
• Had the bellwether been identified earlier,
it would have changed the course of research:
• more focus on coarse-grained analysis
• less on relevancy filtering and model generation
61
Future Work
• Bellwethers in heterogeneous learners
• Promising heterogeneous transfer learners [Nam15][Jing15]
• Perform complex dimensionality mapping transforms
• Can Bellwethers assist in finding the best mapping?
• Study and quantify the bellwether
• What makes a bellwether a bellwether?
• Bellwethers beyond defect prediction
• Are there bellwethers in other data?
62
In conclusion...
• Look for bellwethers
• to use as a baseline
• to justify the use of transfer learning
• They stabilize the pace of conclusions
• not permanent conclusion stability
• They are easy to find
• look only when necessary
• new data can be discarded
• update bellwethers only as they start failing
63