Autonomous Resource Provision in
Virtual Data Centers
Presented by: Noha Elprince
noha.elprince@uwaterloo.ca
IFIP/IEEE DANMS, 31 May 2013
2
Static Data Centers vs. Dynamic
[Figure (RAD Lab, UC Berkeley): demand vs. capacity over time for a static vs. a dynamic data center; the static case shows unused resources]
Static data center:
•  Fixed, pre-assigned resources (provision for peak)
•  Static environment
•  Manual change of configurations
Dynamic (virtualized) data center:
•  Cloud elasticity ~ “Pay as you go”
•  Virtualized environment
•  Automated “self-service” change of configurations
3
Cloud Elasticity problems…
•  Under-provisioning => heavy penalty:
Ø  Lost revenue
Ø  Lost users
[Figures (RAD Lab, UC Berkeley): demand vs. capacity over three days when capacity trails demand]
4
Cloud Elasticity problems…
•  Over-provisioning => underutilization (unused resources)
[Figure (RAD Lab, UC Berkeley): demand vs. capacity over time, with capacity provisioned well above demand]
5
Virtualization (Cloud Foundation)
•  Virtualization allows a computational resource to be partitioned into multiple isolated execution environments (VMs).
•  Turning the machine into a “virtual image” gives a degree of self-immunity from:
Ø  Hardware breakdowns
Ø  Running out of resources
Challenge: Service Differentiation
Problem
q  Over and under provisioning in spite of:
difficulty of estimating the actual needs due to time-
varying and diverse workload.
q  Enabling service differentiation in a virtualized
environment.
6
Methodology
•  Develop and implement an autonomic resource
management controller that:
Ø  Effectively optimize the resource by predicting current
resource needs.
Ø  Continuous resource self-tuning to accommodate load
variations and enforce service differentiation during resource
allocation.
•  Test the proposed prototype on real traces.
7
Motivation
•  Help data centers manage resources effectively.
•  Promote cloud computing adoption (more cloud users => lower costs).
•  Optimize resource use (green IT!).
8
Related Work
v Approaches for autonomic resource management:
–  Utility-based self-optimizing approaches.
–  Model-based approaches built on performance modeling.
–  Machine-learning approaches.
–  Fuzzy-logic approaches.
9
Proposed Solution Architecture: Sys. Modeling
10
[Architecture diagram; the system model outputs the predicted resource r(t+1)]
I. System Modeling : Data set
11
•  Idea: learn from successful jobs (normal termination, fulfilling the client’s anticipated performance).
•  A real computing-center trace from Los Alamos National Laboratory (LANL).
•  LANL is a United States Department of Energy (DOE) national laboratory.
•  LANL conducts multidisciplinary research in fields such as national security, space exploration, renewable energy, medicine, nanotechnology and supercomputing.
•  System: 1024-node Connection Machine CM-5 from Thinking Machines.
•  Jobs: 201,387; duration: 2 years.
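The CM-5 log is the kind of trace distributed in the Parallel Workloads Archive’s Standard Workload Format (SWF): 18 whitespace-separated fields per job with “;” comment headers. A minimal loading sketch with pandas follows; the file name and the short column labels are our own assumptions, not taken from the slides.

```python
import pandas as pd

# Shorthand labels for the 18 SWF fields (our own naming, not from the slides).
SWF_COLUMNS = [
    "job_id", "submit_time", "wait_time", "run_time", "alloc_procs",
    "avg_cpu_time", "used_mem", "req_procs", "req_time", "req_mem",
    "status", "user_id", "group_id", "executable", "queue",
    "partition", "preceding_job", "think_time",
]

def load_trace(path="LANL-CM5-1994.swf"):  # hypothetical local file name
    """Load an SWF job log: whitespace-separated, ';' lines are header comments."""
    return pd.read_csv(path, sep=r"\s+", comment=";", header=None,
                       names=SWF_COLUMNS)

jobs = load_trace()
print(f"{len(jobs):,} jobs loaded")  # the slides report 201,387 jobs
```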
12
v  Feature Selection
•  Use stepwise regression to:
Sort variables out & leave more influential
ones in the model.
•  Results: Out of 18 features, 5 features were selected:
=> run_time, wait_time, Avg_cpu_time, used_mem, status
v  Filter
•  Remove jobs with status =unsuccessful ( failed/ aborted )
•  Discard records that have average_cpu_time_used <=0 and used_mem <=0
v  Data Cleaning
•  Normalize data to remove noise
I. System Modeling : Data Preprocessing
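A minimal sketch of this preprocessing pipeline, reusing the column labels from the loading sketch above. The status code 1 = “completed” follows the SWF convention (an assumption, not stated on the slide), and the forward-selection loop is a generic stand-in for the stepwise regression the slide mentions.

```python
import pandas as pd
import statsmodels.api as sm

def preprocess(jobs: pd.DataFrame) -> pd.DataFrame:
    # Filter: keep successful jobs only (SWF status 1 = completed normally).
    jobs = jobs[jobs["status"] == 1]
    # Discard records whose CPU time and memory usage are both non-positive.
    jobs = jobs[~((jobs["avg_cpu_time"] <= 0) & (jobs["used_mem"] <= 0))]
    # Cleaning: min-max normalize the numeric columns to [0, 1].
    num = jobs.select_dtypes("number")
    return (num - num.min()) / (num.max() - num.min())

def forward_stepwise(X: pd.DataFrame, y: pd.Series, alpha: float = 0.05):
    """Greedy forward selection by OLS p-value (stand-in for stepwise regression)."""
    selected, remaining = [], list(X.columns)
    while remaining:
        pvals = {c: sm.OLS(y, sm.add_constant(X[selected + [c]])).fit().pvalues[c]
                 for c in remaining}
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:   # stop when no remaining feature is significant
            break
        selected.append(best)
        remaining.remove(best)
    return selected
```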
13
I. System Modeling : Statistical Analysis
14
Cascaded classifiers (MISO model)
I. System Modeling : Model I/O
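The slides do not spell out the cascade, so the following is only a hedged sketch of one plausible MISO (multiple-input, single-output) arrangement: each stage is a regressor whose prediction is appended to the feature set of the next stage, and the last stage produces the single output. All names here are illustrative.

```python
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeRegressor

class CascadedRegressor:
    """MISO cascade: stage i is trained on the raw inputs plus the predictions
    of stages 1..i-1; the final stage yields the single output."""

    def __init__(self, stages=None):
        self.stages = stages or [DecisionTreeRegressor(max_depth=8)] * 3

    def fit(self, X, targets):
        # `targets` has one column per stage; the last column is the final output.
        X = np.asarray(X, dtype=float)
        T = np.asarray(targets, dtype=float)
        self.fitted_ = []
        for est, y in zip(self.stages, T.T):
            est = clone(est).fit(X, y)
            self.fitted_.append(est)
            X = np.column_stack([X, est.predict(X)])  # feed prediction forward
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        for est in self.fitted_:
            pred = est.predict(X)
            X = np.column_stack([X, pred])
        return pred  # single output from the last stage
```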
15
§  Linear Regression
§  Sugeno Fuzzy Inference System (FCM, SUB)
§  Regression Tree (REP-Tree)
§  Model Tree (M5P)
§  Boosting (Rep-Tree, M5P)
§  Bagging (Rep-Tree, M5P)
I. System Modeling : ML approaches
Why ML?
-  The data are non-linear in nature.
-  ML can handle the complex nature of the data.
-  It detects dependencies between inputs and outputs efficiently.
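REP-Tree and M5P are Weka learners; as a rough, hedged stand-in in scikit-learn (the Sugeno FIS is omitted here), the candidate families can be compared with cross-validation along these lines:

```python
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

def compare_models(X, y):
    # CART regression trees approximate the REP-Tree / M5P base learners.
    candidates = {
        "linear regression": LinearRegression(),
        "regression tree (CART stand-in)": DecisionTreeRegressor(max_depth=8),
    }
    for name, model in candidates.items():
        rmse = -cross_val_score(model, X, y, cv=5,
                                scoring="neg_root_mean_squared_error")
        print(f"{name}: RMSE = {rmse.mean():.4f} +/- {rmse.std():.4f}")
```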
16
Bagging vs. Boosting Classifiers
•  Bagging (bootstrap aggregating) uses bootstrap sampling.
•  It trains one classifier on each of the k bootstrap samples.
•  The k learned classifiers are combined by majority vote (equal weights).
•  Boosting: weak classifiers are combined into a final strong classifier.
•  After each weak learner is added, the data are reweighted:
Ø  misclassified examples => gain weight
Ø  correctly classified examples => lose weight
•  Thus future learners focus more on the data that previous weak learners misclassified.
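In scikit-learn terms (a sketch; the default decision-tree regressor stands in for the REP-Tree / M5P base learners named on the slides), the two ensembles can be fitted and compared like this:

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor, AdaBoostRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

def fit_ensembles(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    # Bagging: each of the k trees sees a bootstrap sample; predictions are
    # combined with equal weight.
    bagging = BaggingRegressor(n_estimators=25, random_state=0).fit(X_tr, y_tr)
    # Boosting: trees are added sequentially and the training data are
    # reweighted so later trees focus on previously mispredicted examples.
    boosting = AdaBoostRegressor(n_estimators=25, random_state=0).fit(X_tr, y_tr)
    for name, model in [("bagging", bagging), ("boosting", boosting)]:
        rmse = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
        print(f"{name}: test RMSE = {rmse:.4f}")
```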
II. Res. Predictor
17
v The client requests hosting a
specific type of application
with a pre-specified response
time.
v  An initial estimate is generated.
v Rate of prediction is
accompanied by the coming of
the client to the data center.
18
Validation: Perf. Measures for different prediction models

Classifier          Type   RMSE     MAE      RAE      CC
Linear Reg.         C1     0.0024   0.0008   50.33%   0.70
                    C2     0.0023   0.0001   57.29%   0.71
                    C3     0.0026   0.0003   58.15%   0.98
Sugeno FIS (SUB)    C1     0.0021   0.0009   44.89%   0.66
                    C2     0.0012   0.0002   51.06%   0.66
                    C3     0.0011   0.0002   53.93%   0.85
Boosting (M5P)      C1     0.0020   0.0006   34.59%   0.80
                    C2     0.0018   0.0007   39.20%   0.84
                    C3     0.0003   0.0001   10.99%   0.99
Bagging Tree (M5P)  C1     0.0018   0.0005   32.57%   0.84
                    C2     0.0017   0.0007   36.38%   0.84
                    C3     0.0003   0.0001   11.82%   0.99
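For reference, the four measures in the table can be computed as below (a sketch, assuming the usual Weka-style definitions: RAE is the absolute error relative to always predicting the mean, and CC is the Pearson correlation coefficient).

```python
import numpy as np

def report_metrics(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    rmse = np.sqrt(np.mean(err ** 2))                 # root mean squared error
    mae = np.mean(np.abs(err))                        # mean absolute error
    # Relative absolute error: error relative to predicting the mean of y_true.
    rae = np.sum(np.abs(err)) / np.sum(np.abs(y_true - y_true.mean()))
    cc = np.corrcoef(y_true, y_pred)[0, 1]            # Pearson correlation
    return {"RMSE": rmse, "MAE": mae, "RAE": 100 * rae, "CC": cc}
```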
19
Resource Predictor: Learning Time Comparison
III. Resource Allocator
20
1.  The Resource Allocator initially allocates resources based on the prediction model.
2.  It checks the error reported by the tuner.
3.  The tuner calculates the normalized error in resource allocation:
    RespTimeError(k) = (RespTime_ref(k) - RespTime_obs(k)) / RespTime_ref(k)
4.  It takes the feedback from the tuner (ResAdjustment) and sends a command with the appropriate decision to the VC in the VM.
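A hedged sketch of this allocation loop; the predictor/tuner/VC objects, their methods, and the request attributes are hypothetical placeholders for the architecture's components, not names from the paper.

```python
def resp_time_error(ref: float, observed: float) -> float:
    """Normalized response-time error from the slide:
    (RespTime_ref - RespTime_obs) / RespTime_ref."""
    return (ref - observed) / ref

class ResourceAllocator:
    """Hypothetical allocator wrapping the predictor/tuner loop on the slide."""

    def __init__(self, predictor, tuner, vc):
        self.predictor, self.tuner, self.vc = predictor, tuner, vc

    def step(self, request, resp_time_ref, resp_time_obs):
        # 1. Initial allocation from the prediction model.
        allocation = self.predictor.predict(request)
        # 2-3. Normalized error goes to the tuner, which returns ResAdjustment.
        error = resp_time_error(resp_time_ref, resp_time_obs)
        adjustment = self.tuner.adjust(error, request.client_class)  # hypothetical API
        # 4. Send the adjusted allocation to the virtual container (VC).
        self.vc.apply(allocation + adjustment)
```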
21
IV. Resource Tuner : Rule-Based Fuzzy System
Inputs: RespTimeError, ClientClass, Status
Output: ResDirection
ResController: Mamdani-type fuzzy inference system
22
IV. Resource Tuner: Rule-Based Fuzzy System
Fuzzy rule table: ResDirection as a function of RespTimeError and Client Class.
Each cell lists the action when over-provisioned / when under-provisioned.

                      RespTimeError
Client Class     Low                Medium             High
Gold             noAction / SUL     SDM / SUM          SDH / SUH
Silver           SDL / noAction     SDM / SUM          SDH / SUH
Bronze           SDL / noAction     SDM / noAction     SDH / SUH

(SU = speed up, SD = step down; L/M/H = low / medium / high)
-  Total number of rules: 18
-  The membership grades of each attribute (high, medium, low) are adjusted by experts in the data center.
-  ResDir:
•  reflects the percentage of the resource that should be utilized in the VC (ResAdjust = ResDir x ResWt x VCres)
•  ranges over [-1, +1] with MFs (low, med, high) for:
Ø  speed up (+ve side)
Ø  step down (-ve side)
V. Adaptive Learning
23
New incoming data are fed into the prediction model in different ways, depending on the prediction model used:
-  Directly via clustering (when clustering is used, as in TS-FIS)
=> online learning
-  Or stored in the database until a certain threshold is reached, at which point an ECA rule fires and re-modeling is initiated
=> offline learning
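A sketch of the offline path: buffer new samples and re-fit the prediction model once a threshold of fresh samples is reached (a simple event-condition-action trigger). The class and threshold value are illustrative.

```python
class AdaptiveLearner:
    """Offline adaptive learning: buffer new jobs and re-fit the prediction
    model when enough fresh samples have accumulated."""

    def __init__(self, model, threshold=1000):
        self.model = model
        self.threshold = threshold
        self.buffer_X, self.buffer_y = [], []

    def on_new_sample(self, x, y):
        self.buffer_X.append(x)                           # Event: new monitored job
        self.buffer_y.append(y)
        if len(self.buffer_X) >= self.threshold:          # Condition: threshold reached
            self.model.fit(self.buffer_X, self.buffer_y)  # Action: re-model
            self.buffer_X, self.buffer_y = [], []
```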
V. Adaptive Learning: Updating Rules in the Fuzzy Tuner FIS
24
Rule Editor
25
Resource Tuner validation - Example
[Fuzzy rule table repeated from slide 22.]
Method: Testing cases using the fuzzy rule viewer.
26
Resource Tuner Validation - Example
Inputs: RespTimeError = medium, ClientClass = Gold, Status = under-provision
Output: ResDirection = SUM (speed up medium)
Crisp values: RespTimeError = 0.5, ClientClass = 0.9, Status = 0.2 => ResDirection = 0.5
27
Resource Tuner Validation - Example
Inputs: RespTimeError = medium, ClientClass = Silver, Status = under-provision
Output: ResDirection = SUM (speed up medium)
Crisp values: RespTimeError = 0.5, ClientClass = 0.5, Status = 0.2 => ResDirection = 0.5
28
Resource Tuner Validation - Example
Inputs: RespTimeError = medium, ClientClass = Bronze, Status = under-provision
Output: ResDirection = noAction
Crisp values: RespTimeError = 0.5, ClientClass = 0.19, Status = 0.2 => ResDirection = 0.01
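Feeding these three crisp test cases into the tuner sketch from slide 22 reproduces their qualitative behaviour (the crisp class/status encodings are the same assumed ones used in that sketch):

```python
for label, cls in [("Gold", 0.9), ("Silver", 0.5), ("Bronze", 0.19)]:
    print(label, round(res_direction(error=0.5, client_class=cls, status=0.2), 2))
# Gold and Silver yield a positive "speed up medium" direction (about 0.5);
# Bronze stays near zero (no action), matching the three rule-viewer examples.
```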
Conclusions
•  The proposed ML model predicts the right amount of resources (bagging/boosting is promising).
•  The fuzzy tuner:
- accommodates deviations in workload characteristics.
- enforces service differentiation.
•  Adaptive learning guarantees an up-to-date model, lowering future SLA violations.
29
Questions ?
30