Towards Automatic StarCraft
Strategy Generation Using Genetic
Programming
Pablo García Sánchez, Alberto Tonda,
Antonio M. Mora, Giovanni Squillero, J.J. Merelo
University of Granada, Spain
Politecnico di Torino, Italy
INRA, France
Objective
Automatic generation of non-human strategies for RTS games, to explore new possible game plans.
Objective
Automatic generation of non-human strategies for RTS games, to explore new possible game plans.
Also, it’s F***ING COOL!
Outline
● Introduction
● Framework Description
● Experimental Setting
● Results
● Conclusions and Future work
Introduction
● StarCraft: de facto testbed for RTS research
● Motivation: generate high-level strategies using Genetic Programming
● Two different fitness functions used: victory-based and report-based.
● Validation against bots not used during evaluation.
Framework description: overview
Framework description: evaluator
Framework description: Individual
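In the original deck this slide shows the individual encoding as a figure, which is not reproduced in this text. Purely as a hedged illustration (the actual genotype is defined by the framework and may be structured quite differently), a high-level strategy can be pictured as an ordered list of condition-to-action rules that the base bot consults during play; every name below is a hypothetical placeholder.

```python
# Illustrative sketch only: a strategy genotype as an ordered list of rules
# mapping an abstract game-state predicate to a macro action for the base bot.
# The framework's real encoding may differ; all names here are assumptions.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Rule:
    condition: Callable[[dict], bool]   # predicate over an abstract game state
    action: str                         # high-level order handed to the base bot

example_strategy: List[Rule] = [
    Rule(lambda s: s["supply_used"] >= s["supply_cap"] - 2, "train Overlord"),
    Rule(lambda s: s["minerals"] > 300, "expand to a new base"),
    Rule(lambda s: s["army_supply"] > 30, "attack the enemy base"),
]

def next_action(state: dict, strategy: List[Rule]) -> str:
    """Return the action of the first rule whose condition holds."""
    for rule in strategy:
        if rule.condition(state):
            return rule.action
    return "gather resources"   # default behaviour when no rule fires
```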
Framework description: fitness
● Victory-based: lexicographic victory counts using the default final score returned by StarCraft at the end of one match against each of 12 different opponents (divided into 3 tiers); a small sketch follows below.
● Report-based: a more complex metric, aiming at separating military success from in-game economic development, using different rewards, computed after 3 matches against each of 4 different opponents.
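A minimal sketch of how the victory-based fitness could be computed, assuming (as suggested by the results table later in the deck) that wins against harder tiers dominate lexicographically and that the score ratio acts as a tie-breaker; the exact formulation is the framework's, not this sketch's, and the field names are assumptions.

```python
# Hedged sketch of the victory-based fitness: a lexicographic tuple of wins
# against Tier 3 (hard), Tier 2 and Tier 1 opponents, with the ratio of final
# StarCraft scores as a tie-breaker.

def victory_based_fitness(match_results):
    """match_results: one dict per match with keys
    'tier' (1-3), 'won' (bool), 'own_score', 'enemy_score'."""
    wins = {t: sum(r["won"] for r in match_results if r["tier"] == t)
            for t in (1, 2, 3)}
    score_ratio = (sum(r["own_score"] for r in match_results) /
                   max(1, sum(r["enemy_score"] for r in match_results)))
    # Python compares tuples element by element, so victories in harder tiers
    # always outrank victories in easier ones; higher is better.
    return (wins[3], wins[2], wins[1], score_ratio)
```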
Experimental Setting: Parameters
Parameter Value Meaning
μ 30 Population size
λ 30 Number of genetic operators applied at each generation
σ 0.9 Initial strength of the mutation operators
α 0.9 Inertia of the self-adapting process
τ (1, 4) Tournament size for parent selection (range)
MG 30 Stop condition: maximum number of generations
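As a rough sketch of how these parameters interact, the loop below mimics a (μ + λ) evolutionary scheme with tournament selection and self-adapting mutation strength. The actual framework (μGP) implements this differently; the loop is only meant to show the role of each parameter, and the operator success-rate statistic is a placeholder.

```python
import random

MU, LAMBDA, MAX_GEN = 30, 30, 30   # population size, offspring per generation, stop condition
SIGMA, ALPHA = 0.9, 0.9            # initial mutation strength and inertia of its self-adaptation
TAU = (1, 4)                       # tournament size range for parent selection

def evolve(random_individual, mutate, fitness):
    # In practice fitness values would be cached: each evaluation means
    # playing several full StarCraft matches.
    population = [random_individual() for _ in range(MU)]
    sigma = SIGMA
    for _ in range(MAX_GEN):
        offspring = []
        for _ in range(LAMBDA):
            tournament = random.sample(population, random.randint(*TAU))
            parent = max(tournament, key=fitness)        # tournament selection
            offspring.append(mutate(parent, sigma))      # one genetic operator
        operator_success_rate = 0.2                      # placeholder statistic
        sigma = ALPHA * sigma + (1 - ALPHA) * operator_success_rate
        population = sorted(population + offspring, key=fitness, reverse=True)[:MU]
    return max(population, key=fitness)
```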
Experimental setting: sparring bots
● Race evolved: Zerg
● Bot used as base to evolve the high-level strategy: OpprimoBot
● Victory-based fitness hand-coded enemies:
○ Tier 1 (Easy): TerranDummy, ProtossReaverDrop1B, ProtossDefensive, ZergHydraMuta
○ Tier 2 (Medium): OBProtossTemplarRush, ProtossReaverDrop, TerranDefensiveFB,
TerranDefensive
○ Tier 3 (Hard): ZergLurkerRush, TerranWraithHarass, TerranPush, TerranMarineRush
● Report-based fitness hand-coded enemies:
○ TerranDummy, ProtossReaverDrop, TerranWraithHarass, ZergLurkerRush (3 matches against each strategy).
Results: best individuals
Victory-based fitness:
                        Best individual   Average of population
Tier 3 victories        4                 1.73
Tier 2 victories        3                 1.83
Tier 1 victories        3                 2.93
Score ratio             0.0481            0.0378

Report-based fitness:
                        Best individual   Average of population
Military victories      3                 1.76
Economic victories      1                 1.93
Relative destruction    400245            358172
Time to loss            1120              1380.7
Relative economy        0.309             0.501
Results: Validation of best bots
Vs. Bot                 Victory-based best (wins of 10)   Report-based best (wins of 10)
OBTerranDefensiveFB     7                                 1
OBProtossTemplarRush    4                                 8
OBZergHydraMuta         10                                1
OBZergLurkerRush        8                                 0
OBProtossDefensive      8                                 5
OBProtossReaverDrop1B   5                                 1
OBTerranDefensive       5                                 1
OBProtossReaverDrop     3                                 6
OBTerranMarineRush      7                                 0
OBTerranWraithHarass    5                                 0
OBTerranPush            6                                 3
OBTerranDummy           10                                10
Victory-based best      *                                 8
Report-based best       2                                 *
OpprimoBot              6                                 1
TOTAL                   86 of 140                         45 of 140
(Open) Challenges
● Evaluation time (rough cost sketch after this list)
○ Individuals are compared against many bots on many maps
○ Two virtual machines per game (!)
○ Even with limitless resources, ~10 minutes per generation
● Rules and actions
○ Level of abstraction (meta-strategy, strategy, tactic)
○ Constraining the GP, or giving it more freedom?
● Bot generality
○ How to avoid overfitting to a single map or a single enemy race?
○ More opponents help, but evaluation time increases
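To make the evaluation-time point concrete, here is a back-of-the-envelope estimate using numbers from this deck (30 offspring per generation, 12 opponents in the victory-based setup) and an assumed ~10-minute match; the exact figures depend on match length and available machines.

```python
individuals_per_generation = 30     # lambda: offspring evaluated each generation
matches_per_individual = 12         # victory-based setup: one match per opponent
minutes_per_match = 10              # assumption; real match lengths vary
vms_per_match = 2                   # one virtual machine per player, as noted above

total_matches = individuals_per_generation * matches_per_individual   # 360
serial_minutes = total_matches * minutes_per_match                    # 3600 (60 hours)
peak_vms = total_matches * vms_per_match                              # 720 VMs to run fully in parallel

# Fully parallelised, wall-clock time per generation is still bounded by the
# slowest single match, i.e. roughly the ~10 minutes quoted on the slide.
print(total_matches, serial_minutes, peak_vms)
```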
Conclusions
● Our framework has been able to generate high-level strategies that outperform (naive) human-coded ones
● Victory-based fitness outperforms report-based fitness
● Future work:
○ more evaluations, maps, races (Protoss and Terran), co-evolution, machine learning, map analysis
Thank you!
