SCM Dashboard
Monitoring Code Velocity at the Product /
Project / Branch level


Prakash Ranade
AGENDA



   •  What is SCM Dashboard?
   •  Why is SCM Dashboard needed?
   •  Where is it used?
   •  How does it look?
   •  Challenges in building SCM Dashboard
   •  Goals in designing SCM Dashboard
   •  Technology in building SCM Dashboard
   •  Conclusion
What is SCM Dashboard?



•  A framework for organizing, automating, and analyzing
software configuration methodologies, metrics, processes, and
systems that drive product release performance.

•  The Dashboard gathers, organizes, and stores information
from various internal data sources and displays metrics that are
the result of simple or complex calculations with minimal
processing time.

•  A decision support system that provides historical data and
current trends in its portlet region, showing metrics/reports
side-by-side on the same web page.
Why is SCM Dashboard needed?


You cannot manage what you cannot measure.

•  The Dashboard is an easy way to enhance visibility into product
releases, for example by showing how you are doing compared to past
performance, goals, and benchmarks.

What gets watched gets done.
•  Ability to make more informed decisions based on multiple reports.

Not only for the executives, but for all levels of engineering.
•  Release Manager, Director
•  Development Manager, QA Manager
•  Developer, QA
Who needs metrics?

[Diagram: engineering roles surrounding the SCM Dashboard team, with
the metrics each consumes]
•  Dev: type of files, lines changed, file churn
•  Dev Manager: bug fixes, # changes, depot churn
•  QA / QA Manager: bug trends, Perforce trends
•  Director: bug fixes, branch stability reports
How does it look?

[Dashboard screenshots]
Data challenges

•  Multiple build environments: SB, TB, and OB build systems.
•  Complex Bugzilla data: it has gone through multiple
transformations, no initial values were recorded, and some fields
have multiple values.
•  Large Perforce repository: above 3 million changes, more than
5,000 branches, and an archive consisting of 2 TB of data.
Dashboard Goals

Speed
•  Max. 5 seconds response time for requests
•  Provides frequent, or at least daily, updates
•  Bases project status on incremental data updates

Sharing
•  Social engineering
•  Easy to share charts and reports among team members
•  Easy to make project dashboards

Portal
•  Ability to configure multiple metrics on a single page
•  Ability to fine-tune settings and filters on charts and reports
•  Ability to drill down and form aggregations
Building blocks
An Architecture based on Hadoop and MongoDB

•  Hadoop is open-source software for breaking a big job into
smaller tasks, performing each task, and collecting the results.
•  MapReduce is a programming model for data processing; it works
by breaking the processing into two phases, a map phase and a
reduce phase.
•  Hadoop Streaming is a utility that comes with the distribution,
allowing you to create and run MapReduce jobs in Python.
•  HDFS is a filesystem that stores large files across multiple
machines and achieves reliability by replicating the data across
multiple hosts.
•  MongoDB is a document-based database system. Each document
can be thought of as a large hash object: there are keys (columns)
with values, which can be anything, such as hashes, arrays,
numbers, serialized objects, etc.
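As a minimal illustration of the document-as-hash idea, a churn
record could be inserted with pymongo as sketched below; the
database, collection, and field names are hypothetical, loosely
mirroring the records shown later, not the production schema:

# Minimal sketch, assuming a local MongoDB and pymongo 3+.
import datetime
import pymongo

collection = pymongo.MongoClient()["vcs_stats"]["depot_churn"]
collection.insert_one({
    "p4srvr_depotpath": "server1:1666|//depot/component-1/branch-1/",
    "date": datetime.datetime(2011, 4, 27, 17, 31, 36),
    "changes": 1359870,
    # values can themselves be hashes, arrays, numbers, etc.
    "total_dict": {"edit": 2, "add": 1, "all": 3},
})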
Perforce Branch:

Our Perforce branch spans multiple Perforce servers. The branch
specification looks like this:

•  server1:1666
 //depot/<component>/<old-branch>/… //depot/<component>/<new-branch>/…


•  server2:1666
 //depot/<component2>/<old-branch>/… //depot/<component2>/<new-branch>/…
 //depot/<component3>/<old-branch>/… //depot/<component3>/<new-branch>/…


•  server3:1666
 //depot/<component4>/<old-branch>/… //depot/<component4>/<new-branch>/…
Branch policies


•  The Branch Manager identifies and lists new features, bugs, and
improvements in Bugzilla and Perforce BMPS, and then sets the
check-in policies on the branch and change specification forms.
Change 1359870 by pranade@pranade-prism1 on 2011/04/27 17:31:36
    Implement Prism View...
    QA Notes:
    Testing Done: Perforce Create, Update, delete view
    Bug Number: 703648, 703649
   Approved by: daf
    Reviewed by: gaddamk, akalaveshi
    Review URL: https://reviewboard.eng.vmware.com/r/227466/
    #You may set automerge requests to YES|NO|MANUAL below,
    #with at most one being set to YES.
    Merge to: MAIN: YES
    Merge to: Release: NO

Affected files ...

... //depot/component-1/branch-1/views.py#12 edit
... //depot/component-1/branch-1/templates/vcs/perforce.html#15 edit
... //depot/component-1/branch-1/tests.py#1 add
... //depot/component-1/branch-1/utils.py#14 delete

Differences ...
Perforce Data collection

•  “p4 describe” displays the details of a changeset, as follows:
   The changelist number
   The changelist creator’s name and workspace name
   The date when the changelist was created
   The changelist’s description
   The list of submitted files and the code diffs


•  We have a Perforce data dumper script that connects to the
Perforce servers and dumps the “p4 describe” output of each
submitted changelist.

•  The Perforce data dumper script dumps output in 64 MB file
chunks, which are then copied to HDFS.
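A simplified sketch of such a dumper follows; the server port,
changelist list, chunk-rotation logic, and HDFS destination are all
illustrative assumptions, not the production script:

# Minimal sketch, assuming the p4 CLI and an HDFS client are installed.
import subprocess

CHUNK_BYTES = 64 * 1024 * 1024  # rotate local output files at ~64 MB

def write_chunk(prefix, part, buf):
    # Write one finished chunk locally, then copy it into HDFS.
    path = "%s-%03d.txt" % (prefix, part)
    open(path, "wb").write(b"".join(buf))
    subprocess.check_call(["hdfs", "dfs", "-put", path, "/scm/describe/"])

def dump_describes(port, changelists, prefix):
    buf, size, part = [], 0, 0
    for chng in changelists:
        text = subprocess.check_output(["p4", "-p", port, "describe", str(chng)])
        buf.append(text)
        size += len(text)
        if size >= CHUNK_BYTES:
            write_chunk(prefix, part, buf)
            buf, size, part = [], 0, part + 1
    if buf:
        write_chunk(prefix, part, buf)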
MapReduce

•  Each MapReduce script scans the information in the “p4 describe”
output dumped from the Perforce servers. The following reports can be
created by writing different MapReduce scripts (a minimal mapper
sketch follows the list):
        Number of submitted changes per depot path
        File information like add, edit, integrate, branch, delete
        File types such as “c”, “py”, “pl”, “java”, etc.
        Number of lines added, removed, modified
        Most and least revised files
        Bug number and bug status
        Reviewers and test case information
        Change submitter names and group mapping
        Depot path and branch spec mapping
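As an example, here is a hedged sketch of one such report, a mapper
counting file actions per branch, meant to be paired with a summing
reducer; the depot-path slicing is illustrative, not the production
parser:

#!/usr/bin/env python
# Streaming-mapper sketch: count file actions (add/edit/delete/...)
# per depot path from "p4 describe" text on STDIN.
import sys

for line in sys.stdin:
    line = line.rstrip("\n")
    # Affected-file lines look like:
    # ... //depot/component-1/branch-1/views.py#12 edit
    if line.startswith("... //"):
        path_rev, action = line[4:].rsplit(" ", 1)
        depot = "/".join(path_rev.split("/")[:5]) + "/"  # //depot/<comp>/<branch>/
        print("%s\t%s\t1" % (depot, action))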
Python MapReduce

•  MapReduce programs are much easier to develop in a scripting
language using the Streaming API tool. Hadoop MapReduce provides
automatic parallelization and distribution, fault-tolerance, and status
and monitoring tools.

•  Hadoop Streaming interacts with programs that use the Unix
streaming paradigm: inputs come in through STDIN and outputs go to
STDOUT. The data has to be text based, and each line is considered a
record. The overall data flow in Hadoop Streaming is like a pipe:
data streams in through the mapper, and the sorted output streams
out through the reducer. In pseudo-code, using Unix command-line
notation, it looks like the following:

        cat [input_file] | [mapper] | sort | [reducer] > [output_file]
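On the cluster, the same mapper and reducer run unchanged under the
Hadoop Streaming jar. A sketch of the invocation follows; the jar
location and HDFS directories are assumptions that vary by
installation:

hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming-*.jar \
    -input /scm/describe \
    -output /scm/depot_churn \
    -mapper mapper.py \
    -reducer reducer.py \
    -file mapper.py -file reducer.py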
Process

[Pipeline diagram]
•  Combined “p4 describe” output from all servers is split into
64 MB chunks and copied to HDFS.
•  Hadoop provides parallelism: map tasks scan the split files of
“p4 describe” output; reduce tasks write part-01, part-02,
part-03, ... outputs.
•  The results (changes, lines, files, users, and churn metadata)
are loaded into MongoDB, a schemaless document storage system.

p4 describe (p4 server A, B, C) → hadoop (MapReduce) → mongoDB
(changes, lines, files, users)
Python Mapper script:

import os
import sys
# site, site_perforce_servers, match_begin_line, match_end_line,
# dtgrep, and flgrep are helpers from the surrounding codebase (not shown).

def dump_to_reducer(srvr, chng, depotfiles):
    # Emit one "server|depotfile<TAB>date.change" record per affected file.
    if srvr and depotfiles and chng:
        for filename in depotfiles:
            print "%s|%s\t%s" % (srvr, filename, str(chng))

def main():
    chng, depot_files, l = 0, set(), os.linesep
    p4srvr = site_perforce_servers(site.perforce_servers)
    for line in sys.stdin:
        line = line.rstrip(l)
        if line and line.count('/') == 80:    # record-start marker line
            srvr = match_begin_line(line, p4srvr)
            if srvr:
                chng, depot_files = 0, set()
                continue
        if line and line.count('%') == 80:    # record-end marker line
            srvr = match_end_line(line, p4srvr)
            if srvr:
                dump_to_reducer(srvr, chng, depot_files)
                continue
        if line and line[0:7] == 'Change ':
            chng = dtgrep(line)               # extracts "date.changenumber"
            continue
        if line and line[0:6] == '... //':
            flgrep(line, depot_files)         # collects the affected depot file

Python Reducer script:

import os
import sys
import json

def main():
    depot2count = {}
    l = os.linesep
    for line in sys.stdin:
        try:
            p4srvr_depotpath, date_chng = line.split('\t', 1)
        except ValueError:
            continue
        if (not p4srvr_depotpath) and (not date_chng):
            print >> sys.stderr, line
            continue
        dt, change = date_chng.split('.')
        change = change.rstrip(l)
        # Record the latest change number seen per depot path and date.
        depot2count.setdefault(p4srvr_depotpath, {}).setdefault(dt, 0)
        depot2count[p4srvr_depotpath][dt] = int(change)
    for (p4srvr_depotpath, date_hash) in depot2count.items():
        for (dt, chngset) in date_hash.items():
            print json.dumps({'p4srvr_depotpath': p4srvr_depotpath,
                              'date': dt, 'changes': chngset})
mdb = mongo_utils.Vcs_Stats(collection_name="depot_churn")

mdb.collection.create_index([('p4srvr_depotpath', pymongo.ASCENDING),
                             ('date', pymongo.ASCENDING)])

for line in datafile.readlines():
    data = json.loads(line)
    p4srvr_depotpath = "%s" % data['p4srvr_depotpath']
    dstr = data['date']           # "YYYYMMDDhhmmss"
    yy, mm, dd, hh, MM, ss = (dstr[0:4], dstr[4:6], dstr[6:8],
                              dstr[8:10], dstr[10:12], dstr[12:14])
    changes = data['changes']
    mongo_data = {'p4srvr_depotpath': p4srvr_depotpath,
                  'date': datetime.datetime(int(yy), int(mm), int(dd),
                                            int(hh), int(MM), int(ss)),
                  'changes': changes,
                  '_id': "%s/%s:%s" % (p4srvr_depotpath, dstr, changes)}
    mdb.collection.insert(mongo_data)

mdb.collection.ensure_index([('p4srvr_depotpath', pymongo.ASCENDING),
                             ('date', pymongo.ASCENDING)])


mongodb upload script
/* 0 */
{
  "_id": "perforce-server1:1666|//depot/component-1/branch-1/20110204005204:1290141",
  "date": "Thu, 03 Feb 2011 16:52:04 GMT -08:00",
  "p4srvr_depotpath": "perforce-server1:1666|//depot/component-1/esx41p01-hp4/",
  "changes": 1290141,
  "user": "pranade",
  "total_dict": {
    "all": "9",
    "branch": "9"
  }
}
/* 1 */
{
  "_id": "perforce-server1:1666|//depot/component-2/branch-2/20100407144638:1029666",
  "date": "Wed, 07 Apr 2010 07:46:38 GMT -07:00",
  "p4srvr_depotpath": "perforce-server1:1666|//depot/component-2/branch-2/",
  "changes": 1029666,
  "user": "akalaveshi",
  "total_dict": {
    "edit": "3",
    "all": "3"
  }
}
/* 2 */
{
  "_id": "perforce-server1:1666|//depot/component-2/branch-2/20100106003808:976075",
  "date": "Tue, 05 Jan 2010 16:38:08 GMT -08:00",
  "p4srvr_depotpath": "perforce-server1:1666|//depot/component-2/branch-2/",
  "changes": 976075,
  "user": "pranade",
  "total_dict": {
    "integrate": "10",
    "edit": "2",
    "all": "12"
  }
}

mongodb data
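With records in this shape, dashboard widgets can answer per-branch
questions with simple dynamic queries. A minimal sketch follows,
assuming a depot_churn collection like the one above; the database
name, branch path, and date range are illustrative:

# Sum the per-change file counts for one branch over a date range.
import datetime
import pymongo

coll = pymongo.MongoClient()["vcs_stats"]["depot_churn"]
branch = "perforce-server1:1666|//depot/component-2/branch-2/"
cursor = coll.find({
    "p4srvr_depotpath": branch,
    "date": {"$gte": datetime.datetime(2010, 1, 1),
             "$lt": datetime.datetime(2010, 7, 1)},
}).sort("date", pymongo.ASCENDING)
total = sum(int(doc["total_dict"]["all"]) for doc in cursor)
print("file operations on %s: %d" % (branch, total))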
Conclusion


•  We have designed a framework called SCM Dashboard.

•  The “p4 describe” output contains most of the information we need.

•  Hadoop: a horizontally scalable computation solution; Streaming
makes MapReduce programming easy.

•  MongoDB: document model, dynamic queries, comprehensive
data models.
QUESTIONS?
