Handover Parameters
Self-Optimization
by Q-learning in 4G Networks
Realized by: Mohamed Raafat OMRI
Supervised by: Ph.D. Maissa BOUJELBEN
July 2016
Plan
• General context (overview of LTE-A).
• Existing solutions.
• Proposed approach.
• Simulation results.
• Conclusion & perspectives.
• References.
2
General context
• Overview of LTE-Advanced:
LTE-A: mobile communication standard,
formally submitted as a candidate 4G
system to the ITU-R in late 2009.
Approved by the ITU as IMT-Advanced and
finalized by 3GPP in March 2011.
3
LTE: Self-Organizing Network (SON)
Self-configuration: plug-and-play
configuration of newly deployed eNBs.
Self-optimization: optimization of coverage,
capacity, handover and interference.
Self-healing: automatic detection and
correction of network failures.
4
LTE-A development history
5
Problem Statement
Handover definition:
Key procedure that lets users move freely
through the network while staying
connected and receiving quality service.
It transfers a UE's resource allocation from
one base station to another.
6
7
• Handover problems:
Radio Link Failure caused by:
Too early HO.
Too late HO.
HO to the wrong cell.
Unnecessary handovers:
Ping-pong & continuous HO.
(A classification sketch follows below.)
8
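A minimal sketch of how MRO-style logic distinguishes these failure types. The event fields and the one-second threshold are illustrative assumptions based on the usual 3GPP MRO definitions, not code from this presentation.

```python
def classify_ho_failure(rlf_during_ho: bool, seconds_since_ho: float,
                        reconnect_cell: str, source_cell: str,
                        target_cell: str, min_stay_s: float = 1.0) -> str:
    """Classify a radio link failure (RLF) observed around a handover."""
    if rlf_during_ho:
        return "too-late HO"           # link died before the HO completed
    if seconds_since_ho < min_stay_s:
        if reconnect_cell == source_cell:
            return "too-early HO"      # UE falls straight back to the source
        if reconnect_cell != target_cell:
            return "HO to wrong cell"  # UE re-establishes in a third cell
    return "unrelated failure"
```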
Urgent need to optimize HO parameters
(sketched below):
• TTT (Time-to-Trigger): the handover is
initiated only if the triggering condition
holds continuously for the whole TTT
interval.
• Hysteresis: the handover is initiated only
if a neighbor cell's link quality exceeds the
current cell's by at least the hysteresis
margin.
9
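A minimal sketch of the TTT + hysteresis rule above (an A3-like event). Class and parameter names are illustrative, not from the slides.

```python
class A3Trigger:
    """Fires only when the neighbour beats the serving cell by the
    hysteresis margin continuously for the whole time-to-trigger window."""

    def __init__(self, hys_db: float, ttt_s: float):
        self.hys_db = hys_db
        self.ttt_s = ttt_s
        self.timer = 0.0

    def update(self, serving_dbm: float, neighbor_dbm: float,
               dt_s: float) -> bool:
        if neighbor_dbm > serving_dbm + self.hys_db:
            self.timer += dt_s        # condition held for another sample
        else:
            self.timer = 0.0          # condition broke: TTT restarts
        return self.timer >= self.ttt_s

trigger = A3Trigger(hys_db=2.0, ttt_s=0.256)
handover_now = trigger.update(serving_dbm=-95.0, neighbor_dbm=-92.0, dt_s=0.04)
```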
Existing Solutions
• Mobility Robustness Optimization:
SON use case (Rel. 11) that enables
detection, and provides tools for possible
correction, of the following problems:
Ping-pongs in idle mode.
HO to a wrong cell that does not cause
connection failure.
10
• QMRO:
Q-Learning for MRO.
Abstracts the velocities of the mobiles
(or UEs) into a finite set of mobility
states, so as to learn the OTP (Optimum
Trigger Point) for each state.
11
Proposed Approach
• Limitations of the existing solutions:
MRO: a standard that does not specify
any implementation method.
QMRO: a complex solution.
• Proposed solution:
Q-Learning.
12
Q-Learning
• Q-Learning: a reinforcement learning
algorithm in which an agent tries to
discover an optimal policy from its history
of interactions with an environment.
• Machine learning: a branch of Artificial
Intelligence (AI) concerned with designing
and studying systems that can learn from
data.
13
The basic elements required for
reinforcement learning (sketched below):
• A Model (M) of the environment: set of
States (S) and Actions (A).
• A reward function (R).
• A value function (V).
• A policy (P).
14
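To make these elements concrete, here is a minimal tabular Q-learning sketch in Python. The learning rate, discount factor and epsilon are illustrative defaults, not values taken from the presentation.

```python
import random
from collections import defaultdict

Q = defaultdict(float)   # value function: Q[(state, action)] -> estimate

def choose_action(state, actions, epsilon=0.1):
    """Policy (P): epsilon-greedy over the current value estimates."""
    if random.random() < epsilon:
        return random.choice(actions)                 # explore
    return max(actions, key=lambda a: Q[(state, a)])  # exploit

def q_update(s, a, r, s_next, next_actions, alpha=0.1, gamma=0.9):
    """Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(s_next, a2)] for a2 in next_actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
```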
Set of states: TTT (16 values) × Hys. (21 values)
Reading the visible entries, state index = 21 × TTT index + Hys index.
Columns 12–20 and rows 10–15 are elided, as on the original slide.

TTT  TTT (s)  Hys:  0     1     2     3     4     5     6     7     8     9      10     11 ... 20
              (dB): 0.0   0.5   1.0   1.5   2.0   2.5   3.0   3.5   4.0   4.5    5.0    5.5
 0    0.0           000   001   002   003   004   005   006   007   008   009    010    011
 1    0.04          021   022   023   024   025   026   027   028   029   030    031    032
 2    0.064         042   043   044   045   046   047   048   049   050   051    052    053
 3    0.08          063   064   065   066   067   068   069   070   071   072    073    074
 4    0.1           084   085   086   087   088   089   090   091   092   093    094    095
 5    0.128         105   106   107   108   109   110   111   112   113   114    115    116
 6    0.16          126   127   128   129   130   131   132   133   134   135(6) 136(5) 137(8)
 7    0.256         147   148   149   150   151   152   153   154   155   156(4) 157    158(1)
 8    0.32          168   169   170   171   172   173   174   175   176   177(7) 178(2) 179(3)
 9…15 0.48…         189   190   191   192   193   194   195   196   197   198    199    200

(n) marks action n among the 8 actions reachable from state 157 (see slide 17);
the state ↔ parameter mapping is sketched in code below.
15
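A minimal sketch, assuming the state layout read off the table above. The TTT values for the elided rows 10–15 are assumed to follow the standard 3GPP Time-to-Trigger set; only rows 0–9 are visible on the slide.

```python
# State <-> (TTT, Hys) mapping as read off the table above.
# Rows 10-15 of TTT_S are an assumption (standard 3GPP TTT values);
# only 0.0 .. 0.48 s are visible on the slide.
TTT_S = [0.0, 0.04, 0.064, 0.08, 0.1, 0.128, 0.16, 0.256,
         0.32, 0.48, 0.512, 0.64, 1.024, 1.28, 2.56, 5.12]  # 16 values
HYS_DB = [0.5 * i for i in range(21)]                       # 0.0 .. 10.0 dB

def state_index(ttt_idx: int, hys_idx: int) -> int:
    return 21 * ttt_idx + hys_idx

def state_params(state: int) -> tuple[float, float]:
    ttt_idx, hys_idx = divmod(state, 21)
    return TTT_S[ttt_idx], HYS_DB[hys_idx]

assert state_index(7, 10) == 157      # the example state of slide 17
print(state_params(157))              # -> (0.256, 5.0)
```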
States, Actions & Reward
• Set of states: 336 states, one per
(TTT, Hys.) pair (16 × 21); moving between
states increases or decreases the TTT
and/or Hys. indices.
• Set of actions: 8 possible actions for
each state.
• Reward (sketched below):
Reward = Successful HOs / (10 × Drops + 2 × Ping-pongs)
16
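A literal reading of this reward in Python; the guard against a zero denominator is our addition, not something stated on the slide.

```python
def reward(successful_hos: int, drops: int, ping_pongs: int) -> float:
    """Reward successful handovers; a drop costs 5x more than a ping-pong."""
    penalty = 10 * drops + 2 * ping_pongs
    return successful_hos / penalty if penalty > 0 else float(successful_hos)
```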
Example: State 157 possible actions (sketched below)
17
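The slide itself is a figure; here is a sketch of what it appears to show, assuming the 8 actions step the TTT index, the Hys index, or both by ±1 (the grid neighbours annotated around state 157 in the table on slide 15).

```python
def possible_actions(state: int, n_ttt: int = 16, n_hys: int = 21) -> list[int]:
    """Return the neighbouring states reachable in one action."""
    ttt, hys = divmod(state, n_hys)
    steps = [(dt, dh) for dt in (-1, 0, 1) for dh in (-1, 0, 1)
             if (dt, dh) != (0, 0)]
    return [n_hys * (ttt + dt) + (hys + dh)
            for dt, dh in steps
            if 0 <= ttt + dt < n_ttt and 0 <= hys + dh < n_hys]

print(sorted(possible_actions(157)))
# -> [135, 136, 137, 156, 158, 177, 178, 179], matching the table annotations
```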
Simulation results
Parameter           Value
Number of eNodeBs   9
Number of UEs       10
Mobility model      Random Waypoint Model
Propagation model   COST 231-Hata
18
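For orientation, one way the sketches above could be wired together into a learning loop. The simulator hook `run_episode` is hypothetical (the slides list only the simulation parameters, not code), and the helpers come from the earlier sketches.

```python
def train(episodes: int = 1000, start_state: int = 157) -> int:
    """Hedged sketch: learn TTT/Hys settings by trial in the simulator."""
    s = start_state
    for _ in range(episodes):
        a = choose_action(s, possible_actions(s))     # action = target state
        ttt_s, hys_db = state_params(a)
        oks, drops, pps = run_episode(ttt_s, hys_db)  # hypothetical simulator
        r = reward(oks, drops, pps)
        q_update(s, a, r, a, possible_actions(a))     # next state is `a`
        s = a
    return s                                          # state reached last
```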
19
20
Conclusion & Perspectives
A research domain, not yet a production one.
Many theses and research documents to
read.
Relevant documents are available only in
English.
A complex algorithm to implement.
Simulation.
21
Perspectives
• When discussing handover and SON, we
cannot ignore interference. Future projects
could investigate the interaction between
these two major technical challenges of
LTE-A cell deployment, in order to cope
with the explosive growth of traffic.
22
References
3GPP specifications.
IEEE and Springer publications.
S. S. Mwanje and A. Mitschele-Thiel,
"Distributed Cooperative Q-Learning for
Mobility-Sensitive Handover Optimization
in LTE SON."
23
24