How We Size the Academic Suite: Benchmarking at Blackboard™
Speaker: Steve Feldman, Director, Software Performance Engineering and Architecture
[email_address]
Agenda and Introductions
- Goals, Objectives and Outcomes
- Introduction and Methodology
- Results and Findings
- Working with the Sizing Guide
- References and Resources
Total Time: 50 Minutes
Presentation Goals
The goals of this presentation are to:
- Explain the preparatory activities required before executing a benchmark against the Blackboard Academic Suite.
- Present the results and findings from our most recent benchmark activities.
- Review how we size the Blackboard Academic Suite from these benchmark exercises.
Presentation Objectives
- Define the study of behavior modeling.
- Define the study of cognitive modeling.
- Define the study of data modeling.
- Introduce the concept of adoption profiling.
- Share the benchmark objectives and associated test cases.
- Present a case for using sessions per hour over concurrency as an acceptable performance metric.
- Review the differences between cost performance and high performance.
- Discuss techniques for monitoring and measuring workload and growth.
- Provide guidance around storage purchasing.
- Provide guidance around load-balancer purchasing.
Presentation Outcomes
At the end of the session, administrators will be able to:
- Put a plan together to determine current and future adoption profiles.
- Use the current sizing specification for upcoming hardware expenditures.
- Make recommendations back to the Blackboard Performance Engineering team for more effective information sharing.
Part 1: Introduction and Methodology
The Performance Lifecycle (SPE Methodology)
Lifecycle stages: Strategy, Methodology and Best Practices; Data Collection & Usage Analysis; Modeling, Profiling and Simulation; End-to-End Performance Testing; Refactoring and Optimizing; Complete End-to-End Performance Engineering.
A First Look at SPE
The Blackboard Performance Engineering team follows a strict methodology based on the principles of Software Performance Engineering (SPE):
- Assess Performance Risk
- Identify Critical Use Cases
- Select Key Performance Scenarios
- Establish Performance Objectives
- Construct Performance Models
- Determine Software Resource Requirements
- Determine System Resource Requirements
SPE is a methodology introduced by Dr. Connie Smith and Dr. Lloyd Williams (http://www.perfeng.com; Performance Solutions: A Practical Guide to Creating Responsive, Scalable Software).
Behavior Modeling
Behavior modeling is the study of user behavior within a system to determine workload and use case interaction.
Develop Markovian models to determine the probability of the following:
- Use case interaction
- Transactional execution
- Session lengths
Samples are taken based on the following:
- Institutional type and profile: K-12; Higher Education: Small, Medium and Large (Private/Public); Consortium; Corporate and Government
- License range: Basic, LS Only and Full Academic Suite
- Periods of seasonality: Pre-Semester Enrollment, General Exams, Post-Semester
A sketch of fitting such a model appears below.
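To make the Markovian modeling step concrete, here is a minimal sketch of fitting a transition matrix from sampled clickstreams and deriving an expected session length. The use case names and transition counts are hypothetical placeholders, not actual Blackboard data.

```python
import numpy as np

# Hypothetical use cases and transition counts tallied from sampled
# session clickstreams (illustrative values only).
use_cases = ["login", "view_content", "discussion", "assessment", "logout"]
counts = np.array([
    [0, 60, 20, 15, 5],
    [0, 40, 25, 20, 15],
    [0, 30, 30, 10, 30],
    [0, 20, 5, 15, 60],
    [0, 0, 0, 0, 0],  # logout is absorbing: the session ends here
])

# Row-normalize the counts into a Markov transition matrix P.
row_sums = counts.sum(axis=1, keepdims=True)
P = np.divide(counts, row_sums, out=np.zeros(counts.shape), where=row_sums > 0)

# The fundamental matrix N = (I - Q)^-1 of the absorbing chain gives the
# expected number of page loads remaining from each transient state,
# i.e., an estimate of session length by entry point.
Q = P[:-1, :-1]
N = np.linalg.inv(np.eye(len(Q)) - Q)
expected_page_loads = N.sum(axis=1)
print(dict(zip(use_cases[:-1], expected_page_loads.round(1))))
```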
Cognitive Modeling
Cognitive modeling is the psychological study of systematic human behavior within a system to determine patterns of abandonment and adoption.
- Abandonment: concept for explaining the patience of a given user and their willingness to wait for system responsiveness.
  - Utility: use cases can be sub-classed and organized based on importance.
  - Uniform: use cases are equally weighted.
- Adoption: concept for explaining increased frequency of use and reliance on a given system.
A sketch contrasting the two weightings follows.
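As a hedged illustration of the utility versus uniform weightings, the sketch below gives each use case class its own assumed patience threshold and computes an abandonment rate; the class names and thresholds are assumptions for illustration, not published Blackboard figures.

```python
# Assumed patience thresholds (seconds) per utility class; illustrative only.
PATIENCE = {
    "critical": 15.0,  # e.g., submitting a timed assessment
    "standard": 10.0,  # e.g., opening course content
    "trivial": 5.0,    # e.g., browsing an announcement
}

def utility_abandonment(samples: list[tuple[str, float]]) -> float:
    """Utility model: each (class, response_time) sample is judged against
    its own class threshold."""
    return sum(rt > PATIENCE[cls] for cls, rt in samples) / len(samples)

def uniform_abandonment(samples: list[tuple[str, float]], limit: float = 10.0) -> float:
    """Uniform model: every use case shares a single threshold."""
    return sum(rt > limit for _, rt in samples) / len(samples)
```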
Data Modeling
Data modeling is the study of linear and latitudinal volumetric growth of data in a system.
- Linear growth refers to vertical growth in the form of increased record counts. Factors affecting linear growth: increased adoption; data management strategy (need for pruning and archiving).
- Latitudinal growth refers to horizontal growth in the form of increased complexity and maturity of data. Factors affecting latitudinal growth: increased adoption; maturity of processes.
Samples are taken bi-annually from all willing clients. A sketch of measuring both growth dimensions follows.
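A minimal sketch of measuring both growth dimensions from two snapshots of a table; the snapshot shape and sample numbers are hypothetical stand-ins for whatever your institution actually samples.

```python
def growth_rates(before: dict, after: dict, months: int) -> dict:
    """Each snapshot holds 'rows' (total record count, linear growth) and
    'avg_children' (average related records per parent, latitudinal growth)."""
    return {
        "rows_per_month": (after["rows"] - before["rows"]) / months,
        "complexity_delta_per_month":
            (after["avg_children"] - before["avg_children"]) / months,
    }

# Example: a discussion-board table sampled six months apart (made-up data).
print(growth_rates({"rows": 120_000, "avg_children": 4.2},
                   {"rows": 168_000, "avg_children": 6.9}, months=6))
# -> rows_per_month: 8000.0, complexity_delta_per_month: ~0.45
```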
Establish Performance Objectives
- Regression Comparisons: critical client-facing impacts; vendor sponsor requirements
- Implications of new features and sub-systems
- Technology Ports: software-based
- Platform Changes: OEM components; OEM tuning parameters
- Key Stakeholder Requirements: prototypes for system configuration changes
- Other: client requests
Part 2: Academic Suite Benchmark Review
Release 7.X Performance Objectives
- Performance Objective #1: Version 6.3 to 7.X Unicode conversion operational downtime minimization. Small data models: minutes. Moderate data models: hours. Large data models: under 3 days.
- Performance Objective #2: Regression performance from 6.3 to 7.X cannot degrade more than 5%, and should instead improve by 5% without configuration (hardware/software) manipulation.
- Performance Objective #3: Complex domain analysis. Need to change the data model to always support complex domains.
- Performance Objective #4: Technology port of Perl to Java for the Discussion Board sub-system. Business case for the final technology port.
Release 7.X Performance Objectives
- Performance Objective #5: Intel Multi-Core Analysis. Vendor donations and expected coverage in the hardware guide.
- Performance Objective #6: Dell Blade Technology. Vendor donations and expected coverage in the hardware guide.
- Performance Objective #7: Sun Multi-Core Analysis. Vendor donations and expected coverage in the hardware guide.
- Performance Objective #8: Sun Cost Performance and High Performance Server Comparison. Vendor lab time and expected coverage in the hardware guide.
Release 7.X Performance Objectives
- Performance Objective #9: Tomcat Clustering. Exploratory analysis for a technical feature change.
- Performance Objective #10: Network Attached Storage for Databases and Database Server Binaries. ASP request for cost-efficient operational management.
- Performance Objective #11: Windows Content Load-Balancing Solutions. Vendor request for technology change; risk mitigation strategy.
- Performance Objective #12: Persistence Cache (OSCache) Configuration. Exploratory analysis for configuration guidance.
Performance Scenarios
Workload: Summary/Description
- Under-Loaded Learning System and Community System: Regression test case from 6.3 performing a mix of student viewing/activity, instructor authoring and minimal administrator management. Meant to be an under-loaded system. Response times < 5s.
- Calibrated Learning System and Community System: The same 6.3 regression test case, meant to be a calibrated system. Response times < 10s.
- Over-Loaded Learning System and Community System: The same 6.3 regression test case, meant to be an over-loaded system. Response times < 15s.
- Calibrated Academic Suite: Combination of Learning System, Community System and Content System use case interactions to reflect the budding adoption of the full Academic Suite. Response times calibrated to the under-loaded system comparison (~5s).
- Calibrated Learning System and Community System with Concurrency Model for Assessments: Combination of Learning System and Community System use case interactions with 40% of the workload in a controlled Assessment Concurrency Problem. Response times calibrated to the under-loaded system comparison (~5s).
- Calibrated Academic Suite with Complex Domains: Identical workload to the under-loaded Learning System with Community System model, but with the definition of 50 complex domain relationships. Response times calibrated to the under-loaded system comparison (~5s).
Performance Scenarios
- All scenarios are targeted for single, dual and triple workload evaluations: load-balanced servers (1 to N servers, typically 3) and Tomcat clusters (1 to N nodes).
- Performance calibration at the application server level is the main focus, using a pre-defined application configuration.
- Calibrated for response time acceptance: Under-Loaded: sub 5 seconds (typically under 1 second); Calibrated: sub 10 seconds (typically under 2 seconds); Over-Loaded: sub 15 seconds (typically under 10 seconds).
A sketch of the calibration loop follows.
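A sketch of the calibration idea under those bands: ramp the simulated workload until the observed response time crosses the band's ceiling, then record the largest sustainable load. Here run_load is a hypothetical stand-in for whatever load-testing harness drives the scenario.

```python
# Response-time ceilings (seconds) for each calibration band.
BANDS = {"under-loaded": 5.0, "calibrated": 10.0, "over-loaded": 15.0}

def calibrate(run_load, band: str, start: int = 50, step: int = 25) -> int:
    """Increase the concurrent-simulation count until the measured 90th-
    percentile response time reaches the band's ceiling; return the last
    load level that stayed under it. run_load(load) -> p90 seconds."""
    ceiling = BANDS[band]
    load = start
    best = 0
    while True:
        if run_load(load) >= ceiling:
            return best
        best = load
        load += step
```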
Performance Objective #1
Unicode conversion times, Sun Microsystems:
- Small Institution: Benchmark #1: 25 minutes; Benchmark #2: 16 minutes (4 threads); Improvement: 36%
- Moderate Institution: Benchmark #1: 309 minutes; Benchmark #2: 130 minutes (4 threads); Improvement: 58%
- Large Institution: Benchmark #1: 6360 minutes; Benchmark #2: 2650 minutes (4 threads); Improvement: 58%
Unicode conversion times, Linux and Windows:
- Small Institution (Linux): Benchmark #1: 21 minutes; Benchmark #2: 12 minutes (4 threads)
- Small Institution (Windows): Benchmark #1: 9 minutes; Benchmark #2: Not Valid
- Moderate Institution (Linux): Benchmark #1: 288 minutes; Benchmark #2: 107 minutes (4 threads); Improvement: 37%
- Moderate Institution (Windows): Benchmark #1: 196 minutes; Benchmark #2: Not Valid
- Large Institution (Linux): Benchmark #1: 5389 minutes; Benchmark #2: 2120 minutes; Improvement: 40%
- Large Institution (Windows): Benchmark #1: 989 minutes; Benchmark #2: Not Valid
Performance Objective #2
Performance Objective #3
Performance Objectives #7 and #9
Performance Objective #9
Performance Objective #7
Learning System/Community System workloads (UPL = Unique Page Loads):
R1 (Workload of 120 Possible Concurrent Simulations):
- R7.1 Entry-Level: 7,238 Sessions/Hr; 19 UPL/Second; 311,656 Bytes/Second; 51,888 Transactions
- R7.1 Mid-Level: 8,080 Sessions/Hr; 22 UPL/Second; 480,824 Bytes/Second; 53,780 Transactions
- R7.1 High-Level: 8,212 Sessions/Hr; 22 UPL/Second; 488,168 Bytes/Second; 54,049 Transactions
- R7.1 HL Clustered (2 Nodes): 10,455 Sessions/Hr; 25 UPL/Second; 544,673 Bytes/Second; 59,239 Transactions
R2 (Workload of 240 Possible Concurrent Simulations):
- R7.1 Entry-Level: 12,459 Sessions/Hr; 31 UPL/Second; 640,958 Bytes/Second; 87,433 Transactions
- R7.1 Mid-Level: 13,341 Sessions/Hr; 34 UPL/Second; 729,616 Bytes/Second; 90,353 Transactions
- R7.1 High-Level: 14,913 Sessions/Hr; 33 UPL/Second; 695,319 Bytes/Second; 94,181 Transactions
- R7.1 HL Clustered (4 Nodes): 16,063 Sessions/Hr; 45 UPL/Second; 968,128 Bytes/Second; 106,659 Transactions
R3 (Workload of 360 Possible Concurrent Simulations):
- R7.1 Entry-Level: 17,288 Sessions/Hr; 42 UPL/Second; 901,103 Bytes/Second; 118,754 Transactions
- R7.1 Mid-Level: 18,455 Sessions/Hr; 50 UPL/Second; 1,102,667 Bytes/Second; 130,811 Transactions
- R7.1 High-Level: 20,343 Sessions/Hr; 51 UPL/Second; 1,145,440 Bytes/Second; 145,287 Transactions
- R7.1 HL Clustered (6 Nodes): 24,034 Sessions/Hr; 65 UPL/Second; 1,329,037 Bytes/Second; 157,629 Transactions
Performance Objective #7 (Cont.)
Full Academic Suite workloads:
R7 (Workload of 200 Possible Concurrent Simulations):
- R7.1 Entry-Level: 5,721 Sessions/Hr; 13 UPL/Second; 275,672 Bytes/Second; 37,313 Transactions
- R7.1 Mid-Level: 12,548 Sessions/Hr; 33 UPL/Second; 728,082 Bytes/Second; 84,004 Transactions
- R7.1 High-Level: 12,974 Sessions/Hr; 35 UPL/Second; 735,846 Bytes/Second; 84,970 Transactions
- R7.1 HL Clustered (4 Nodes): 13,804 Sessions/Hr; 36 UPL/Second; 763,955 Bytes/Second; 90,941 Transactions
R8 (Workload of 400 Possible Concurrent Simulations):
- R7.1 Entry-Level: 11,908 Sessions/Hr; 34 UPL/Second; 668,189 Bytes/Second; 77,553 Transactions
- R7.1 Mid-Level: 18,857 Sessions/Hr; 53 UPL/Second; 1,157,486 Bytes/Second; 118,353 Transactions
- R7.1 High-Level (2 Nodes): 14,668 Sessions/Hr; 32 UPL/Second; 676,802 Bytes/Second; 96,742 Transactions
- R7.1 HL Clustered (4 Nodes): 24,034 Sessions/Hr; 65 UPL/Second; 1,392,037 Bytes/Second; 157,629 Transactions
R9 (Workload of 600 Possible Concurrent Simulations):
- R7.1 Entry-Level: 12,652 Sessions/Hr; 25 UPL/Second; 451,975 Bytes/Second; 64,289 Transactions
- R7.1 Mid-Level: 23,056 Sessions/Hr; 63 UPL/Second; 1,196,553 Bytes/Second; 149,709 Transactions
- R7.1 High-Level (3 Nodes): 20,207 Sessions/Hr; 47 UPL/Second; 1,014,189 Bytes/Second; 130,907 Transactions
- R7.1 HL Clustered (6 Nodes): 27,997 Sessions/Hr; 71 UPL/Second; 1,527,433 Bytes/Second; 181,121 Transactions
Performance Objectives #5, 6, 7, 8 and 10
Part 3: Working with the Sizing Guide
Determining My Adoption Profile
- Read the Blackboard Capacity Planning Guide. Capacity Planning Factors: a must read!
- Determine your current Performance Maturity Model position.
- Put together a business plan with functional and technical stakeholders: identify adoption goals for the coming calendar year; identify application initiatives (if stakeholders don't know them, have them pursue them): feature roll-out, sub-system enablement, changes in adoption patterns.
- Draft Service Level Agreements (SLAs): rank use cases in the system.
- Set a goal for your future Performance Maturity Model position.
- Probe your end-users and audience for acceptable response times.
Determining My Adoption Profile (cont.)
Look into the past to see the future. Analyze the data that is available and digestible:
- Application log files for usage by sub-system (trend analysis)
- Application log files for site statistics: hits, files, pages, visits
- Application log files or network statistics for bandwidth and utilization
- Database growth statistics and storage usage
Within Blackboard:
- Write simple database functions to determine the linear and latitudinal state of data.
- If you have historical back-ups, restore and compare against the present state of data.
- Study critical use cases for behavior characteristics.
- Work together with the greater Blackboard Community!
Evaluate enterprise monitoring and measurement tools (Coradiant: a true turn-key solution). Enlist the Statistics or Computer Science department for support with this analysis. Analysis should be done seasonally; system vitals should be reviewed weekly and monthly. A sketch of simple log-based trend analysis follows.
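For the log trend analysis, a minimal sketch of counting distinct sessions per hour from application access logs. The timestamp and session-cookie layout assumed in the regex is hypothetical; adapt it to your actual Blackboard log format.

```python
import re
from collections import defaultdict

# Assumed log layout: an Apache-style timestamp plus a session_id token.
LINE = re.compile(r"\[(\d{2}/\w{3}/\d{4}):(\d{2}):.*?session_id=(\w+)")

def sessions_per_hour(path: str) -> dict:
    """Count distinct session IDs observed in each (day, hour) bucket."""
    buckets = defaultdict(set)
    with open(path) as fh:
        for line in fh:
            match = LINE.search(line)
            if match:
                day, hour, session = match.groups()
                buckets[(day, hour)].add(session)
    return {bucket: len(ids) for bucket, ids in sorted(buckets.items())}

# The peak hourly value feeds directly into the adoption profiles below:
# peak = max(sessions_per_hour("bb-access.log").values())
```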
Light Adoption Profile
- Supports a peak workload of 1,000 to 20,000 sessions per hour based on configuration: roughly 10 to 60 Unique Page Loads per Second; average page or download size of 100 KB.
- Often used as an external complementary aid to the class.
- Low adoption institutionally: 15 to 35% of active courses/sections take advantage.
- Limited functionality: mostly content sharing, little collaboration.
- Over-Loaded to Calibrated configurations used.
Moderate Adoption Profile
- Supports a peak workload of 10,000 to 30,000 sessions per hour based on configuration: roughly 30 to 90 Unique Page Loads per Second; average page or download size of 500 KB.
- The most critical application on campus behind e-mail.
- Moderate adoption institutionally, highest among students: 35 to 50% of active courses/sections take advantage.
- Extensive functionality: advanced content sharing, collaboration, in-class models for assessment and content delivery.
- Ideal target for a Calibrated environment.
Heavy Adoption Profile
- Supports a peak workload of 30,000 to 50,000 sessions per hour based on configuration: greater than 100 Unique Page Loads per Second; average page or download size of 100 KB to 1 MB. Workload rivals some of the largest commerce sites.
- Heavy adoption institutionally: an institutional initiative to leverage Blackboard.
- Extensive functionality: advanced content sharing, heavy integration and Building Blocks, extreme collaboration, in-class models for assessment and content delivery.
- Optimal for an Under-Loaded configuration.
Choosing the Right Hardware
- Cost Performance Model: for cost-conscious institutions; calibrated to a 10-second abandonment policy; mostly Level 1 and 2 Performance Maturity Model institutions.
- High Performance Model: performance over cost (institutional goals for adoption); calibrated to a 5-second abandonment policy; mostly Level 3 through 5 Performance Maturity Model institutions.
Reading Each Profile
Workload characteristics:
- Sessions Per Hour: concurrency is not a valid identifier, and neither are FTE counts.
- Unique Page Loads Per Second: a complementary metric based on concurrent workload, not users.
Homogeneous configurations are presented, though there is a shift toward heterogeneous configurations.
- Web/Application Tier: 1, 2 and 4 socket systems presented based on CPU clock speed, RAM and server counts.
- Database Tier: 1, 2, 4 and 8 socket systems presented based on CPU clock speed, RAM and server counts. Real Application Clusters offered.
Light Adoption Profile: Cost Performance
- Resembles the Calibrated to Over-Loaded performance configuration.
- Requires a distributed configuration from the start (application and database systems); load-balancing recommended from the start. Blackboard scales best horizontally (consider clusters as well).
- Each application server will support 5,000 to 7,000 unique sessions in an hour. Blade or pizza-box models are most efficient.
- 10,000 sessions per hour ≠ 10,000 users logged in. Based on a queuing model in which about 250 unique users are authenticated at any time; each session is roughly 90 seconds in length, with disposable, trivial use cases. (See the Little's Law sketch below.)
- Systems utilized no greater than 65% at the application tier during peak workload and 30% at the database tier.
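The queuing arithmetic behind "10,000 sessions per hour ≠ 10,000 users logged in" is Little's Law (L = λ × W): concurrent users equal the arrival rate multiplied by the average session length. A worked sketch; note that the 2,500-user figure for the Moderate/High profile is derived from the stated session length, not quoted from the guide.

```python
def concurrent_users(sessions_per_hour: float, session_seconds: float) -> float:
    """Little's Law: L = lambda * W, with lambda expressed in sessions/second."""
    return sessions_per_hour * session_seconds / 3600.0

print(concurrent_users(10_000, 90))   # Light/Cost Performance  -> 250.0
print(concurrent_users(20_000, 90))   # Light/High Performance  -> 500.0
print(concurrent_users(30_000, 300))  # Moderate/High (derived) -> 2500.0

# The inverse reading is equally useful for capacity planning: about 250
# authenticated users with 90-second sessions sustain 10,000 sessions/hour.
```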
Light Adoption Profile: High Performance
- Resembles the Calibrated performance configuration.
- Requires a distributed configuration from the start (application and database systems); requires load-balancing from the start. Blackboard scales best horizontally (consider clusters as well).
- Best performance when each application server supports 7,000 sessions or fewer. A good candidate for clustering; blade or pizza-box models are most efficient.
- 20,000 sessions per hour ≠ 20,000 users logged in. Based on a queuing model in which about 500 unique users are authenticated at any time, with the workload distributed against a load-balanced configuration; each session is roughly 90 seconds in length, with disposable, trivial use cases.
- Systems utilized no greater than 65% at the application tier during peak workload and 30% at the database tier.
Moderate Adoption Profile: Cost Performance
- Resembles the Calibrated performance configuration.
- Requires a distributed configuration from the start (application and database systems); requires load-balancing from the start. Blackboard scales best horizontally (consider clusters as well).
- Blade or pizza-box models are most efficient; a quad server model can be as effective; multi-core technologies are just as effective; a good candidate for clustering.
- 20,000 sessions per hour ≠ 20,000 users logged in: more complex use cases, maturity in how the product is being used, robust execution models (concurrency and queuing models).
- Consider RAC as a cost-performance alternative to a large monolithic deployment. NAS-based storage is just as effective and easy to manage; consider the investment now while it is manageable.
Moderate Adoption Profile: High Performance
- Resembles the Calibrated to Under-Loaded performance configuration.
- Requires a distributed configuration from the start (application and database systems); requires load-balancing from the start. Blackboard scales best horizontally; clustering will assist.
- Systems are not saturated or even close to being saturated; consistent utilization greater than 65% is the limit.
- A quad socket, multi-core database server is optimal, if not larger. Still a good candidate for RAC: consider RAC as early as possible; if not RAC, scale up at the database tier.
- 30,000 sessions per hour ≠ 30,000 users logged in; each session is roughly 300 seconds in length.
Heavy Adoption Profile: Cost Performance
- Resembles the Calibrated to Under-Loaded configuration.
- Requires a distributed configuration from the start (application and database systems); requires load-balancing from the start. Blackboard scales best horizontally (consider clusters as well).
- 30,000 sessions per hour ≠ 30,000 users logged in. Each application server will support 7,000 sessions per hour.
- RAC or a scale-up model on the database. Windows clients need to make a decision around database support for scalability. The database is capable of supporting 50,000 sessions per hour, but performs best when supporting only 20,000.
Heavy Adoption Profile: High Performance
- Resembles the Under-Loaded configuration.
- Requires a distributed configuration from the start (application and database systems); requires load-balancing from the start. Blackboard scales best horizontally (consider clusters as well).
- 50,000 sessions per hour ≠ 50,000 users logged in.
- RAC or a scale-up model.
Sizing Storage
- Determine the rate of growth of key tables: QTI tables (ASI and Result), Portal Extra Info, Discussion Board, Course Content, Users and Course Users, Activity Accumulator.
- Determine the rate of growth from a data file perspective in the database: monthly projections, with the goal of determining patterns.
- Determine the rate of growth from a file system perspective: monthly projections, with the goal of determining patterns.
- Nail down a strategy for data retention and archival.
- Research the following: http://www.spec.org/sfs97r1/results/sfs97r1.html and http://www.storageperformance.org
A sketch of projecting growth from monthly samples follows.
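A minimal sketch of the monthly projection step, fitting a least-squares slope to monthly storage samples and extrapolating; the sample values are made up, and real numbers would come from your database's segment-size statistics and the content file system.

```python
def project(samples_gb: list[float], months_ahead: int) -> float:
    """Fit a least-squares slope (GB/month) to monthly samples and
    extrapolate forward from the most recent sample."""
    n = len(samples_gb)
    mean_x = (n - 1) / 2
    mean_y = sum(samples_gb) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in enumerate(samples_gb))
             / sum((x - mean_x) ** 2 for x in range(n)))
    return samples_gb[-1] + slope * months_ahead

# Six monthly file-system samples (GB), projected 12 months out:
print(round(project([140, 152, 161, 175, 190, 204], 12), 1))  # -> 357.6
```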
Storage
Number of Existing Courses | Number of Existing Users | File System Size | Ratio of File System to Database Storage
500 | 7,000 | 20 GB | 10:1
5,000 | 50,000 | 200 GB | 5:1
50,000 | 300,000 | 800 GB | 4:1
500,000 | 600,000 | Greater than 1 TB | 3:1
A sketch applying these ratios follows.
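Reading the table the straightforward way, database storage is roughly the file system size divided by the ratio. A small lookup sketch under that assumption (treating "greater than 1 TB" as a 1 TB floor):

```python
# (courses, users, file_system_gb, fs_to_db_ratio) rows from the table above.
SIZING = [
    (500, 7_000, 20, 10),
    (5_000, 50_000, 200, 5),
    (50_000, 300_000, 800, 4),
    (500_000, 600_000, 1024, 3),  # "greater than 1 TB" treated as a floor
]

def estimate(courses: int) -> tuple[float, float]:
    """Return (file_system_gb, database_gb) from the smallest profile row
    that covers the given course count."""
    for max_courses, _, fs_gb, ratio in SIZING:
        if courses <= max_courses:
            return fs_gb, fs_gb / ratio
    fs_gb, ratio = SIZING[-1][2], SIZING[-1][3]
    return fs_gb, fs_gb / ratio

print(estimate(3_000))  # -> (200, 40.0): 200 GB of files, ~40 GB of database
```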
Load-Balancer Support
The guide is fairly agnostic, so long as the load-balancing device supports session affinity. Blackboard as an organization advises on two vendors in particular:
- NetScaler (used in ASP)
- Juniper Networks (used in Product Development)
- F5 BIG-IP (formerly used in ASP)
Part 4: References and Resources
References
- Blackboard Academic Suite Hardware Sizing Guide (Behind the Blackboard)
- Performance and Capacity Planning Guidelines for the Blackboard Academic Suite (Behind the Blackboard)
- http://www.perfeng.com
- http://www.spec.org/sfs97r1/results/sfs97r1.html
- http://www.storageperformance.org
- http://www.coradiant.com
- http://www.quest.com
- http://www.bmc.com
Past Presentations of Note
- B2 2006: How We Size the Academic Suite, Benchmarking at Blackboard
- B2 2006: Deploying Tomcat Clusters in an Advanced Blackboard Environment
- 2006 BbWorld Presentation: Practical Guide to Performance Tuning and Scaling (2-Hour Workshop)
- B2 2005: Introduction to Load Testing, A Blackboard Primer
- B2 2005: Performance Testing Building Blocks
- Users Conference 2005: Managing Your Blackboard Deployment for Growth and Performance
- Users Conference 2005: Applied Software Performance Engineering
- B2 2004: Introduction to Software Performance Engineering
- B2 2004: Profiling Building Blocks for Performance Analysis
Questions?