Game Balancing with
Ecosystem Mechanism
1
Wen Xia, Bhojan Anand (presenter)
Background
2
Balance Control
3
[Flow diagram: Difficulty vs. Time/Skill, with the "too easy" and "too hard" regions on either side of the balanced band]
State-of-the-Art Approaches for Game Balancing
Offline Strategies
• Pre-defined behaviors for agents
• Manual adjustment and game version updates
• Data collection and offline learning
• Quantitative methods
• Etc.
4
State-of-the-Art Approaches for Game Balancing Cont.
Online/Real-time Strategies – DDA (most common)
• Adaptive Behavior of Agents (power, size, attack frequency, …)
• PCG (procedural content generation, e.g. more weapons, new weapons, buildings)
(a minimal generic DDA sketch follows below)
5
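For context, a minimal, generic illustration of a DDA rule (this is not the approach proposed in these slides; the function, parameters, and thresholds are invented for illustration):

```python
def adjust_enemy_power(enemy_power: float, player_hp_ratio: float,
                       target_ratio: float = 0.5, step: float = 0.05) -> float:
    """Generic DDA rule (illustrative only): if the player keeps too much HP,
    enemies get slightly stronger; if the player is struggling, ease off."""
    if player_hp_ratio > target_ratio:
        return enemy_power * (1 + step)
    return enemy_power * (1 - step)

# Example: the player finished the last round with 80% HP -> nudge power up.
print(adjust_enemy_power(10.0, 0.8))   # 10.5
```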
Online Strategies
 Adaptive Behavior
• Dynamic scripting
• Real-time genetic control
• PSO-ANN model
• Etc.
6
Online Strategies Cont.
 PCG
• Experience-driven: personalized content
• Search-based: generate and test
• Learning-based: modeling the game with content features
• Etc.
7
Problems
 Simultaneous evolution of behavior and content is seldom considered
• Evolving either one influences the contribution of the other to the overall outcome.
• Integrating different learning strategies creates extra work for developers.
 Diversity of results is ignored
• Diversity of content and behaviors creates curiosity and fun.
• Players usually do not notice the small difference between the “best solution” and “good solutions”.
 Analyzing player logs causes a heavy workload.
• Mapping from player logs to features, and from features to game content, requires too much manual work.
8
System Design
9
Game System Vision
10
[Vision diagram: evolve both Behavior and Content to achieve Balance, provide Diversity, and stay easy to design & maintain]
Ecosystem & Game Environment
11
 Similarities
• Diversity of agents with simple behavior rules.
• Different types of agents may have strength gaps.
• Both need to be balanced.
 Differences
• Ecosystem agents have complicated relationships with each other.
• Game agents mostly interact with the player’s character.
Ecosystem Macro-mechanism
 Balance point
• Resource limitation
 Cycle chain
• Different species play different roles
 Diversification
• Same population, not always same behavior
12
Ecosystem Macro-mechanism Cont.
 Balance point → Total balance goal
• Resource limitation
 Cycle chain → Individual balance goals
• Different species play different roles
 Diversification → Action weight diversity
• Same population, not always same behavior
13
Ecosystem Micro-mechanism
 Simple behavior for one individual
• What to eat, how to move, etc.
 Local perception
• Self-knowledge, neighbor communication
 Evolution
• Next generations evolve from the current one
14
Ecosystem Micro-mechanism Cont.
 Simple behavior for one individual
• What to eat, how to move, etc.
 Local perception
• Self-knowledge, neighbor communication
 Evolution
• Next generations evolve from the current one
15
Swarm intelligence
Architecture
16
• Content – a single swarm (the EIM manager): each particle contains weights for the number and types of game objects.
• Behaviour – EIM swarms (one game-object type per EIM): each particle contains the behaviour parameters of one game object in that swarm.
* EIM – Ecosystem Implementing Model (a Swarm)
(a data-structure sketch follows below)
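To make the two-level layout concrete, here is a minimal sketch; the class and field names are hypothetical, since the slides do not specify the implementation's data structures:

```python
import random
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class BehaviourParticle:
    """Level 1 particle: behaviour parameters (action weights) of one game object."""
    action_weights: List[float]
    velocity: List[float] = field(default_factory=list)

@dataclass
class EIM:
    """Ecosystem Implementing Model: a swarm holding one type of game object."""
    object_type: str
    particles: List[BehaviourParticle]

@dataclass
class ManagerParticle:
    """Level 2 particle: weights for the number of each game-object type (content)."""
    object_counts: Dict[str, int]

def random_eim(object_type: str, n_agents: int, n_actions: int) -> EIM:
    """Random initial positions for every agent, as required for diversity."""
    return EIM(object_type,
               [BehaviourParticle([random.random() for _ in range(n_actions)],
                                  [0.0] * n_actions)
                for _ in range(n_agents)])

# Example: a swarm of 5 "bot" agents, each with 3 action weights.
bots = random_eim("bot", n_agents=5, n_actions=3)
```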
Diversity Rationale
 All swarms and agents (objects) have random initial positions (including newly born agents).
 Different swarms may settle in different local optima.
 Stable agents mutate randomly (with low probability); a small sketch follows below.
17
[Figure: Evolution Space]
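One way to realise the low-probability mutation of a stable agent; the mutation rate and value range here are illustrative, not values from the slides:

```python
import random

MUTATION_PROB = 0.05   # "low probability" – illustrative value only

def maybe_mutate(weights, low=0.0, high=1.0, prob=MUTATION_PROB):
    """Occasionally re-randomise a stable agent's action weights, which helps
    different swarms escape to (and settle in) different local optima."""
    if random.random() < prob:
        return [random.uniform(low, high) for _ in weights]
    return list(weights)

print(maybe_mutate([0.2, 0.5, 0.3]))   # usually unchanged, sometimes re-randomised
```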
Diversity Rationale Cont.
 Example of one set of weights for the “bot” agent
• 3 swarms, 3 action weights
• Agents within the same swarm have similar, but not identical, weights
• Some swarms contain randomly mutated agents (6th in the top row, 1st in the bottom row)
18
Learning Flow
n and k are set manually.
Stochastic environment → increase n
Large action base → increase k
(a sketch of the two-level loop follows below)
19
* Goals in Level 1 and Level 2 should be aligned with the ‘Game Balancing’ factor
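Based on the speaker notes (one generation = n iterations of the individual EIMs; the EIM manager should only evolve content once individual behaviours are no longer too noisy), here is a hedged sketch of the control loop. The class and function names, and the exact way k gates the manager update, are assumptions:

```python
import random

class EIM:                        # Level 1: behaviour swarm for one object type
    def update_particles(self):
        pass                       # one PSO step over behaviour parameters

class EIMManager:                 # Level 2: swarm over object numbers and types
    def update_particle(self):
        pass                       # one PSO step over content weights

def behaviours_stable(eims) -> bool:
    """Placeholder: in the real system this comes from game-environment feedback."""
    return random.random() > 0.3

def run_balancing(eims, manager, n_iterations=10, k_stability=5, generations=3):
    for _ in range(generations):
        stable_streak = 0
        for _ in range(n_iterations):          # one generation = n iterations
            for eim in eims:                    # Level 1: evolve behaviours
                eim.update_particles()
            stable_streak = stable_streak + 1 if behaviours_stable(eims) else 0
        # Level 2: evolve content only after behaviours have settled for k
        # iterations; updating content against noisy behaviour is wasted work.
        if stable_streak >= k_stability:
            manager.update_particle()

run_balancing([EIM(), EIM()], EIMManager())
```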
Goals
 Player’s Power Variables
• Objective variables: HP, armor, weapon strength, character level, etc.
• Subjective variables: proficiency, habits, emotional condition, etc.
• Networking variables: team cooperation/competition.
 System Requirements
• Goals must be set manually, within the evolution range.
• Balancing goal (e.g. based on objective variables).
• Full understanding of game content/agents/actions.
20
Future works
Current goal
Test Implementation & Evaluation
21
Test Scene
22
[Screenshots: numerical settings, demo view, game settings, remaining HP, agent counts per type, learning record, individual performance]
Evaluation Objectives
1) Adaptation
We set different maximum HP and armor values for the player to show that our
approach can adapt when the player's properties change.
2) Diversification
We show that the resulting behaviours and content types are not always similar
when the same test cases are repeated.
23
Adaptivity Results (armor)
 Final error: ± 5%
 Passed results: 8
 Failed results: 1
• Test 1B
• Caused by early convergence of PSO
24
(Repeated tests: A–C share the same conditions)
Adaptivity Results (HP)
 Final error: ± 5%
 Passed results: 9
 Observation: the higher the HP, the more stable the learning curve
• The same change in object numbers has less influence on the total error, since it is a smaller fraction of a larger HP pool.
25
Diversity Results (content & behavior)
26
Complete Results of 18 Tests
https://docs.google.com/spreadsheets/d/1U05mqhuwOBv71ieFPqBjkTK2JHE04w1fqc6TcJNSwx4/edit#gid=229731456
27
Disadvantages & Future Work
 Not very fast
• Every update needs at least one more iteration/generation of play to collect feedback from the game environment.
 Large error in the first several iterations/generations
• The player might notice odd behaviour.
 Cannot handle aggregation
• A more advanced learning algorithm might address this.
28
Q&A
29
MORE SLIDES & DEMO – for Q&A Support
30
Test Settings for Learning Component
31
Test Settings for Game Objects
32
Test Cases
33
Balance Goals
34
Bots: 30 damage / sec
Ghost: 60 damage / sec
Succubus: 110 damage / sec
Player’s HP at the end of one generation: 0
(see the sanity-check sketch below)
Back to Test Scene
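To connect these targets to the ±5% final error reported earlier, a small sanity-check sketch; whether the error is normalised by the player's maximum HP is an assumption, and the example HP value is invented:

```python
DAMAGE_TARGETS = {"bot": 30, "ghost": 60, "succubus": 110}   # damage / sec goals

def balance_error(player_hp_remaining: float, player_max_hp: float) -> float:
    """Relative deviation from the goal of ending the generation at exactly 0 HP."""
    return player_hp_remaining / player_max_hp

# Example: the player ends a generation with 40 HP out of 1000 -> 4% error (within ±5%).
print(balance_error(40, 1000))   # 0.04
```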
Demo – Game Balancing
v: velocity (displacement per step), x: position (state), p: best position,
t: current time step, i: individual, g: neighbor,
ω: inertia weight, c: cognitive weight,
r: random number ∈ (0, 1)
Particle Swarm Optimization (PSO)
36
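For reference, the standard (local-best) PSO update consistent with this legend, writing c_1 for the cognitive weight (the c listed above) and c_2 for the social weight, is:

v_i(t+1) = ω · v_i(t) + c_1 · r_1 · (p_i − x_i(t)) + c_2 · r_2 · (p_g − x_i(t))
x_i(t+1) = x_i(t) + v_i(t+1)

where p_i is individual i's best position found so far and p_g is the best position found in its neighborhood.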
PSO Cont.
37
Variants of PSO
Multi-swarm Optimization
Hierarchical Swarm Optimization
Waves of Swarm Particles
...
38
Back to System Design
Editor's Notes
  • #5: Pre-defined behaviors for agents – menu selection of ‘novice’, ‘player’, ‘expert’, or multiple levels with increasing difficulty. Manual adjustment and game version updates – address balancing issues, based on player comments, in new versions. Data collection and offline learning – learn about the player and tune the next version of the game. Quantitative methods – check the score and adapt the difficulty according to the score. Etc.
  • #17: The system design has two levels. Level 1 consists of the individual EIMs, which evolve individual behaviors. Level 2 is the EIM manager, which evolves the numbers and types of game objects. EIM stands for Ecosystem Implementing Model. One EIM contains one type of game object and works as a swarm whose particles are the parameters that control the behavior of the game objects. In Level 1, the same type of game object may be split across different EIMs, so that even though they share the same optimization goal, the final outcomes can be quite different. In Level 2 there is only one swarm, in which every particle contains one set of weights corresponding to the numbers of the different types of game objects.
  • #20: One learning cycle of an individual EIM is one iteration, and one learning cycle of the EIM manager is one generation; one generation contains n iterations. k is used to trigger the learning process of the EIM manager, because while the individual behavior is still too noisy there is little point in evolving the number or type of game objects. If the game environment is very stochastic, n needs to be high, because a longer period is needed to reduce the influence of extreme data. If the action base of a game object is very large, k needs to be high, because the larger the search space is, the longer the particles need to gather around the optimized value for the same acceptable error range. Currently, both n and k are set manually. {Balancing Goal: the enemies should attack the player such that when one game round (one generation) ends, the player's HP reaches 0. NOTE: one generation has a fixed time duration (in the test cases: 10 secs x 10 iterations = 100 secs). Bots: 30 damage / sec; Ghost: 60 damage / sec; Succubus: 110 damage / sec.}
  • #21: {e.g. Balancing Goal: the enemies should attack the player such that when one game round (one generation) ends, the player's HP reaches 0. NOTE: one generation has a fixed time duration (in the test cases: 10 secs x 10 iterations = 100 secs). Bots: 30 damage / sec; Ghost: 60 damage / sec; Succubus: 110 damage / sec.} Evaluation objectives: 1) Adaptation – we set different maximum HP and armor values for the player to show that our approach can adapt when the player's properties change. 2) Diversification – we show that the resulting behaviours and content types are not always similar when the same test cases are repeated.