Sonar Metrics
  Keheliya Gallaba
    WSO2 Inc.
Why collect metrics?
●   You cannot improve what you don’t measure
●   What you don’t measure, you cannot prove
●   Broken Window Theory
What to do?
●   Prevention is the best medicine
●   Planning and Prioritizing
●   Technical Debt Resolution
What to monitor?
●   Duplicated code
●   Coding standards
●   Unit tests
●   Complex code
●   Potential bugs
●   Comments
●   Design and architecture
How to monitor?
●   Sonar Dashboard
      –   Lines of code
      –   Code Complexity
      –   Code Coverage
      –   Rules Compliance
●   Time Machine
●   Clouds & Hot spots
Demo
Metrics - Rules
●   Violations
       –   Total number of rule violations
●   New Violations
       –   Total number of new violations
●   xxxxx violations
       –   Number of violations with severity xxxxx, xxxxx being blocker, critical, major,
           minor or info
●   New xxxxx violations
       –   Number of new violations with severity xxxxx, xxxxx being blocker, critical,
           major, minor or info
●   Weighted violations
       –   Sum of the violations weighted by the coefficient associated with each severity
           (Sum(xxxxx_violations * xxxxx_weight))
       –   Default Weights: INFO=0;MINOR=1;MAJOR=3;CRITICAL=5;BLOCKER=10
●   Rules compliance index (violations_density)
       –   100 - (weighted_violations / lines of code) * 100
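A hypothetical worked example (all numbers invented): a 1,000-line project with 2 blocker,
4 critical, 10 major, 20 minor and 50 info violations has
weighted_violations = 2*10 + 4*5 + 10*3 + 20*1 + 50*0 = 90, so
violations_density = 100 - (90 / 1000) * 100 = 91%.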
Metrics - Size
●   Physical lines
       –   Number of carriage returns
●   Comment lines
       –   Number of Javadoc, multi-line comment and single-line comment lines. Empty
           comment lines, header file comments (mainly used to hold the license) and
           commented-out lines of code are not included.
●   Commented-out lines of code
       –   Number of commented-out lines of code. Javadoc blocks are not scanned.
●   Lines of code (ncloc)
       –   Number of physical lines - number of blank lines - number of comment lines
           - number of header file comments - number of commented-out lines of code
●   Density of comment lines
       –   Number of comment lines / (lines of code + number of comments lines) * 100
       –   With this formula:
              50% means there are as many comment lines as lines of code
              100% means the file contains only comment lines and no code
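A hypothetical worked example: a file with 800 lines of code and 200 comment lines has a
density of 200 / (800 + 200) * 100 = 20%.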
Metrics – Size (Contd.)
●   Packages
       –   Number of packages
●   Classes
       –   Number of classes including nested classes, interfaces, enums and annotations
●   Files
       –   Number of analyzed files
●   Directories
       –   Number of analyzed directories
●   Accessors
       –   Number of getter and setter methods used to read or write a class's properties.
●   Methods
       –   Number of methods, not including accessors. A constructor is counted as a method.
●   Public API
       –   Number of public classes, public methods (excluding accessors) and public properties (excluding public static final ones)
●   Public undocumented API
       –   Number of public API elements without a Javadoc block
●   Density of public documented API (public_documented_api_density)
       –   (Number of public API - Number of undocumented public API) / Number of public API * 100
●   Statements
       –   Number of statements, as defined in the Java Language Specification, but without block
           definitions. The statements counter is incremented by one each time an expression, if, else,
           while, do, for, switch, break, continue, return, throw, synchronized, catch or finally is
           encountered.
       –   The statements counter is not incremented by a class, method, field or annotation
           definition, or by package and import declarations.
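To make the counting rule concrete, here is a minimal hypothetical sketch (class and method
names invented); by the rule above it contains 4 statements:

public class Counter {
    private int max = 10;            // field definition: not counted
    public int clamp(int x) {        // method definition: not counted
        if (x > max) {               // if: +1
            return max;              // return: +1
        }
        System.out.println(x);       // expression statement: +1
        return x;                    // return: +1
    }
}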
Metrics – Complexity
●   Complexity
      –   The Cyclomatic Complexity Number (CCN) is also known as the McCabe metric. It
          comes down to counting 'if', 'for', 'while' statements etc. in a method:
          whenever the control flow of a method splits, the cyclomatic counter is
          incremented by one.
      –   Each method has a minimum value of 1 by default, except accessors, which are
          not considered methods and so don't increment complexity. Each of the
          following Java keywords/statements increments this value by one:
           ●   if
           ●   for
           ●   while
           ●   case
           ●   catch
           ●   throw
           ●   return (that isn't the last statement of a method)
           ●   &&
           ●   ||
           ●   ?
      –   Note that else, default, and finally don't increment the CCN value any further.
Metrics – Complexity (continued..)
public void process(Car myCar) {                                    // +1 (method)
    if (myCar.isNotMine()) {                                        // +1 (if)
        return;                                                     // +1 (return that isn't the last statement)
    }
    myCar.paint("red");
    myCar.changeWheel();
    while (myCar.hasGazol() && myCar.getDriver().isNotStressed()) { // +2 (while, &&)
        myCar.drive();
    }
    return;                                                         // last statement: no increment
}
// Total cyclomatic complexity = 5
Metrics – Complexity
                        (Continued..)
●   Average complexity by method (function_complexity)
       –   Average cyclomatic complexity number by method


●   Complexity distribution by method
    (function_complexity_distribution)
       –   Number of methods for given complexities


●   Average complexity by class (class_complexity)
       –   Average cyclomatic complexity by class


●   Complexity distribution by class (class_complexity_distribution)
       –   Number of classes for given complexities


●   Average complexity by file (file_complexity)
       –   Average cyclomatic complexity by file
Metrics – Duplication
●   Duplicated lines (duplicated_lines)
       –   Number of physical lines touched by a duplication

●   Duplicated blocks (duplicated_blocks)
       –   Number of duplicated blocks of lines

●   Duplicated files (duplicated_files)
       –   Number of files involved in a duplication of lines

●   Density of duplicated lines
    (duplicated_lines_density)
       –   Duplicated lines / Physical lines * 100
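A hypothetical worked example: a module with 10,000 physical lines, 1,200 of which are
touched by a duplication, has duplicated_lines_density = 1200 / 10000 * 100 = 12%.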
Metrics – Tests
●   Unit tests (tests)
       –   Number of unit tests


●   Unit tests duration (test_execution_time)
       –   Time required to execute unit tests


●   Unit test errors (test_errors)
       –   Number of unit tests that failed with an unexpected exception


●   Unit test failures (test_failures)
       –   Number of unit tests that failed on an assertion


●   Unit test success density (test_success_density)
       –   (Unit tests - (errors + failures)) / Unit tests * 100 (worked example below)


●   Skipped unit tests (skipped_tests)
       –   Number of skipped unit tests
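A hypothetical worked example for test_success_density: 200 unit tests with 4 errors and
6 failures give (200 - (4 + 6)) / 200 * 100 = 95%.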
Metrics – Tests (continued..)
●   Line Coverage (line_coverage)
      –   On a given line of code, line coverage simply answers the
          question: "Is this line of code executed during unit test
          execution?". At project level, this is the density of covered
          lines (worked example below):
          Line coverage = LC / EL where
           ●   LC - lines covered (lines_to_cover – uncovered_lines)
           ●   EL - total number of executable lines (lines_to_cover)
●   New Line Coverage (new_line_coverage)
      –   identical to line_coverage but restricted to new / updated
          source code
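A hypothetical worked example: a project with lines_to_cover = 200 and uncovered_lines = 50
has LC = 200 - 50 = 150 and EL = 200, so line coverage = 150 / 200 = 75%.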
Metrics – Tests (continued..)
●   Branch coverage (branch_coverage)
    ●   On each line of code containing boolean expressions, branch
        coverage simply answers the question: "Has each boolean
        expression evaluated both to true and false?". At project
        level, this is the density of possible branches in flow
        control structures that have been followed (worked example below).
            Branch coverage = (CT + CF) / (2*B) where
            CT - branches that evaluated to "true" at least once
            CF - branches that evaluated to "false" at least once
            (CT + CF = conditions_to_cover – uncovered_conditions)
            B - total number of branches (2*B = conditions_to_cover)


●   New Branch Coverage (new_branch_coverage)
        –   identical to branch_coverage but restricted to new / updated source code
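To make this concrete, here is a minimal hypothetical sketch (method name and numbers
invented): the single boolean expression below contains B = 2 branch points, so
conditions_to_cover = 2*B = 4.

// Two boolean operands on one line -> B = 2, conditions_to_cover = 4.
boolean accept(int x, int y) {
    return x > 0 && y > 0;
}
// Suppose tests made (x > 0) evaluate to both true and false, but (y > 0)
// only to true (short-circuiting skips it whenever x <= 0). Then CT = 2,
// CF = 1, and branch coverage = (CT + CF) / (2*B) = (2 + 1) / 4 = 75%.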
Metrics – Tests (continued..)
●   Coverage (coverage)
    ●   The coverage metric combines the two previous metrics, line coverage and
        branch coverage, to give an even more accurate answer to the question
        "how much of the source code is being exercised by your unit tests?".
    ●   Coverage is calculated with the following formula (worked example below):
          coverage = (CT + CF + LC) / (2*B + EL) where
          CT - branches that evaluated to "true" at least once
          CF - branches that evaluated to "false" at least once
          LC - lines covered (lines_to_cover - uncovered_lines)
          B - total number of branches (2*B = conditions_to_cover)
          EL - total number of executable lines (lines_to_cover)
●   New Coverage (new_coverage)
    ●   identical to coverage but restricted to new / updated source code
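A hypothetical worked example: with EL = 100 executable lines of which LC = 80 are covered,
and B = 4 branches (2*B = 8 conditions) with CT = 3 and CF = 2,
coverage = (3 + 2 + 80) / (8 + 100) = 85 / 108 ≈ 79%.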
Metrics – Tests (continued..)
●   Conditions to Cover (conditions_to_cover)
      –   Total number of conditions which could be covered by unit tests.
●   New Conditions to Cover (new_conditions_to_cover)
      –   identical to conditions_to_cover but restricted to new / updated source code
●   Lines to Cover (lines_to_cover)
      –   Total number of lines of code which could be covered by unit tests.
●   New Lines to Cover (new_lines_to_cover)
      –   identical to lines_to_cover but restricted to new / updated source code
●   Uncovered Conditions (uncovered_conditions)
      –   Total number of conditions which are not covered by unit tests.
●   New Uncovered Conditions (new_uncovered_conditions)
      –   identical to uncovered_conditions but restricted to new / updated source code
●   Uncovered Lines (uncovered_lines)
      –   Total number of lines of code which are not covered by unit tests.
●   New Uncovered Lines (new_uncovered_lines)
      –   identical to uncovered_lines but restricted to new / updated source code
Metrics – Design
●   Depth of inheritance tree (dit)
       –   The depth of inheritance tree (DIT) metric provides, for each class, a measure of the inheritance
           levels from the top of the object hierarchy. In Java, where all classes inherit from Object, the
           minimum value of DIT is 1.
●   Number of children (noc)
       –   A class's number of children (NOC) metric simply measures the number of direct and indirect
           descendants of the class.
●   Response for class (rfc)
       –   The response set of a class is a set of methods that can potentially be executed in response to a
           message received by an object of that class. RFC is simply the number of methods in the set.
●   Afferent couplings (ca)
       –   A class's afferent couplings is a measure of how many other classes use the specific class.
●   Efferent couplings (ce)
       –   A class's efferent couplings is a measure of how many different classes are used by the specific class.
●   Lack of cohesion of methods (lcom4)
       –   LCOM4 measures the number of "connected components" in a class. A connected component is a set
           of related methods and fields. There should be only one such component in each class. If there are
           2 or more components, the class should be split into that many smaller classes.
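A minimal hypothetical sketch of the LCOM4 idea (class and member names invented): the
class below has two connected components, {a, getA, incrementA} and {b, resetB}, so
LCOM4 = 2 and it is a candidate for splitting into two classes.

public class Mixed {
    private int a;
    private String b = "";

    // Component 1: methods that touch field 'a'
    public int getA() { return a; }
    public void incrementA() { a++; }

    // Component 2: the only method that touches field 'b'
    public void resetB() { b = ""; }
}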
Metrics – Design (continued..)
●   Package cycles (package_cycles)
      –   Minimal number of package cycles detected to be able to identify all undesired
          dependencies.


●   Package dependencies to cut (package_feedback_edges)
      –   Number of package dependencies to cut in order to remove all cycles between
          packages.


●   File dependencies to cut (package_tangles)
      –   Number of file dependencies to cut in order to remove all cycles between packages.


●   Package edges weight (package_edges_weight)
      –   Total number of file dependencies between packages.


●   Package tangle index (package_tangle_index)
      –   Gives the level of tangle of the packages: the best value, 0%, means there are no
          cycles; the worst value, 100%, means the packages are completely tangled. The index
          is calculated as: 2 * (package_tangles / package_edges_weight) * 100.
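A hypothetical worked example: 3 file dependencies to cut (package_tangles) out of 120 file
dependencies between packages (package_edges_weight) give a tangle index of
2 * (3 / 120) * 100 = 5%.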
Metrics – Design (continued..)
●   File cycles (file_cycles)
       –   Minimal number of file cycles detected inside a package to be able to identify all
           undesired dependencies.


●   Suspect file dependencies (file_feedback_edges)
        –   File dependencies to cut in order to remove cycles between files inside a
            package. Warning: cycles between files inside a package are not always bad.


●   File tangle (file_tangles)
       –   file_tangles = file_feedback_edges.


●   File edges weight (file_edges_weight)
       –   Total number of file dependencies inside a package.


●   File tangle index (file_tangle_index)
       –   2 * (file_tangles / file_edges_weight) * 100.
Metrics – SCM
●   Commits
      –   The number of commits.
●   Last commit date
      –   The latest commit date on a resource.
●   Revision
      –   The latest revision of a resource.
●   Authors by line
      –   The last committer on each line of code.
●   Revisions by line
      –   The revision number on each line of code.
Coverage - Take-home Points:
●   Don’t use percentage metrics for coverage
●   Unit tests make code simpler and easier to
    understand
Compliance - Take-home Point:




 Don’t Change the Rules During the Game
Complexity - Take-home Point:




  Don’t Prohibit Complexity, Manage It.
Comments - Take-home Point:




    Code a little. Comment a little.
Architecture & Design - Take-home Point:




      Sonar-guided re-factoring
Live Sonar instance

http://nemo.sonarsource.org/
Technical Debt Plugin
Technical Debt Calculation
               Important metrics to look for
●   duplicated_blocks
●   violations – info_violations
●   public_undocumented_api
●   uncovered_complexity_by_tests (it is considered that
    80% of coverage is the objective)
●   function_complexity_distribution >= 8,
    class_complexity_distribution >= 60
●   package_edges_weight
Technical Debt Calculation
Debt (in man days) =

      cost_to_fix_duplications +
      cost_to_fix_violations +
      cost_to_comment_public_API +
      cost_to_fix_uncovered_complexity +
      cost_to_bring_complexity_below_threshold +
      cost_to_cut_cycles_at_package_level
Calculation of Debt ratio




Debt Ratio = (Current Debt / Total Possible Debt) * 100
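A hypothetical worked example: a current debt of 40 man days against a total possible debt
of 500 man days gives a debt ratio of (40 / 500) * 100 = 8%.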
Sonar in 2 minutes
●   Download http://sonar.codehaus.org/downloads/
●   unzip
●   sonar.sh console
●   mvn sonar:sonar
●   Browse to http://localhost:9000/
Installing Plug-ins



●   Download the plug-in jar
●   Copy it to extensions/plugins
●   Restart Sonar
Some Useful Plugins

●   SQALE – Quality Model
●   Technical Debt
●   Eclipse/IntelliJ IDEA
●   JIRA Issues
●   Build Breaker
Thank You !




twitter.com/keheliya
keheliya@wso2.com
April 5, 2012
Image credits:
http://crimespace.ning.com/profiles/blogs/psychological-impact-of-the
http://agileandbeyond.blogspot.com/2011/05/velocity-handle-with-care.html
