Application of Monte-Carlo Tree Search in a Fighting Game AI
Shubu Yoshida, Makoto Ishihara, Taichi Miyazaki,
Yuto Nakagawa, Tomohiro Harada, and Ruck Thawonmas
Intelligent Computer Entertainment Laboratory
Ritsumeikan University
Outline
1. Background of this research
2. Monte-Carlo Tree Search
3. Monte-Carlo Tree Search for a Fighting Game
4. Experimental Environment
5. Experimental Method
6. Result
7. Competition result in 2016
8. Conclusion
Background (1/2)
A Fighting Game AI Competition is held every year [1].
High-ranking AIs have been rule-based (until 2015).
Rule-based: the AI always takes the same action in the same situation.
Human players can easily predict such an AI's action patterns and outsmart it.
[1] http://www.ice.ci.ritsumei.ac.jp/~ftgaic/
Background (2/2)
- Apply Monte-Carlo Tree Search (MCTS) to a fighting game AI.
- MCTS decides the AI's next action by stochastic simulations.
- MCTS has already been successful in many games [2][3].
We evaluate the effectiveness of MCTS in a fighting game.
[2] S. Gelly et al., "The Grand Challenge of Computer Go: Monte Carlo Tree Search and Extensions", Communications of the ACM, Vol. 55, No. 3, pp. 106-113, 2012.
[3] N. Ikehata and T. Ito, "Monte-Carlo Tree Search in Ms. Pac-Man", Proc. IEEE Conference on Computational Intelligence and Games (CIG), pp. 39-46, 2011.
Monte-Carlo Tree Search (1/5)
[Diagram: selection → expansion → simulation → backpropagation; the four steps are repeated until the set time has elapsed]
Monte-Carlo Tree Search (2/5): Selection
[Diagram repeated, highlighting the selection step]
Formula of UCB1

$UCB1_i = \bar{X}_i + C \sqrt{\dfrac{2 \ln N_i^p}{N_i}}$

・$\bar{X}_i$ : the average reward of node $i$ (the evaluation value; exploitation)
・$C$ : the balance parameter
・$N_i^p$ : the total number of times the parent node of node $i$ has been visited
・$N_i$ : the total number of times node $i$ has been visited

The second term preferentially selects a child node that has been visited less (exploration).
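As a concrete illustration of the formula above, here is a minimal Python sketch of the UCB1 computation (the function and argument names are ours, not from the authors' code):

```python
import math

def ucb1(avg_reward, c, parent_visits, visits):
    """UCB1 for node i: exploitation (the average reward X_i)
    plus an exploration bonus that grows when node i has been
    visited less often than its siblings."""
    return avg_reward + c * math.sqrt(2.0 * math.log(parent_visits) / visits)

# A rarely visited child earns a larger exploration bonus than a
# frequently visited sibling with the same average reward.
rare = ucb1(avg_reward=0.5, c=3, parent_visits=100, visits=2)
frequent = ucb1(avg_reward=0.5, c=3, parent_visits=100, visits=50)
assert rare > frequent
```

With a large balance parameter such as the paper's C = 3, the exploration term dominates early on, so every child is sampled before the search commits to the best-looking one.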
Monte-Carlo Tree Search (3/5): Expansion
[Diagram repeated, highlighting the expansion step]
Monte-Carlo Tree Search (4/5): Simulation
[Diagram repeated, highlighting the simulation step]
Monte-Carlo Tree Search (5/5): Backpropagation
[Diagram repeated, highlighting the backpropagation step]
MCTS for a Fighting Game (1/2)
$UCB1_i = \bar{X}_i + C \sqrt{\dfrac{2 \ln N_i^p}{N_i}}$

$\bar{X}_i = \dfrac{1}{N_i} \sum_{j=1}^{N_i} eval_j$

$eval_j = (afterHP_j^{my} - beforeHP_j^{my}) - (afterHP_j^{opp} - beforeHP_j^{opp})$
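Concretely, the reward of one simulation and the resulting average reward could be computed as follows (an illustrative Python sketch; the HP names mirror the slide's symbols, the function names are ours):

```python
def eval_j(before_hp_my, after_hp_my, before_hp_opp, after_hp_opp):
    """Reward of the j-th simulation: the AI's own HP change minus
    the opponent's HP change. Positive when the AI dealt more
    damage than it received."""
    return (after_hp_my - before_hp_my) - (after_hp_opp - before_hp_opp)

def average_reward(evals):
    """X_i: the mean of the eval_j values collected at node i."""
    return sum(evals) / len(evals)

# Example: the AI lost 30 HP but dealt 120 damage -> reward +90.
r = eval_j(before_hp_my=400, after_hp_my=370,
           before_hp_opp=400, after_hp_opp=280)
assert r == 90
```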
MCTS for a Fighting Game (2/2)
Expansion: all actions available to the AI are expanded at once, whereas standard MCTS expands one node at a time.
Simulation: because the thinking time is limited, a rollout is cut off at a fixed tree depth instead of being played to the end of the game.
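Putting the pieces together, the fighting-game-adapted loop (expand all available actions at once, cut rollouts off at a fixed depth, and finally act with the most-visited root child) might look roughly like the Python sketch below. Everything here is illustrative: the actual competition agents run on the Java-based FightingICE platform, and `simulate` and `time_left` stand in for the game's forward model and the per-frame time budget.

```python
import math

class Node:
    """One tree node; the edge into it is labeled with an action."""
    def __init__(self, action=None, parent=None):
        self.action = action
        self.parent = parent
        self.children = []
        self.visits = 0
        self.total_reward = 0.0

def ucb1(node, c=3.0):
    if node.visits == 0:
        return float("inf")          # try unvisited children first
    return (node.total_reward / node.visits
            + c * math.sqrt(2.0 * math.log(node.parent.visits) / node.visits))

def mcts_decide(actions, simulate, time_left, n_max=10, d_max=2):
    root = Node()
    while time_left():
        # 1. selection: descend via UCB1 until a leaf is reached
        node, depth = root, 0
        while node.children:
            node = max(node.children, key=ucb1)
            depth += 1
        # 2. expansion: once the visit threshold N_max is passed and
        #    the depth limit D_max is not, expand ALL available
        #    actions at once (standard MCTS expands one at a time)
        if node.visits >= n_max and depth < d_max:
            node.children = [Node(a, node) for a in actions]
            node = node.children[0]
        # 3. simulation: a short fixed-length rollout (T_sim frames),
        #    not a playout to the end of the game
        reward = simulate(node.action)
        # 4. backpropagation: push the reward up to the root
        while node is not None:
            node.visits += 1
            node.total_reward += reward
            node = node.parent
    # finally, act with the most-visited child of the root
    return max(root.children, key=lambda n: n.visits).action
```

In FightingICE the `time_left` predicate would be a wall-clock check against the 16.67 ms response budget; here any callable that eventually returns False works.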
Experimental Environment
FightingICE
- Used as the platform of the international fighting game AI competition
- 1 game: 3 rounds
- 1 round: 60 seconds

$myScore = \dfrac{oppHP}{myHP + oppHP} \times 1000$

Response time: 16.67 ms
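Read concretely, myScore is the opponent-side share of the combined HP total, scaled to 1000; per the talk, a score above 500 means the AI outperformed its opponent. A small illustrative Python version with made-up HP values:

```python
def my_score(my_hp, opp_hp):
    """myScore = oppHP / (myHP + oppHP) * 1000.
    500 is an even round; above 500, the AI's performance is
    superior to the opponent's."""
    return opp_hp / (my_hp + opp_hp) * 1000

assert my_score(300, 300) == 500.0   # perfectly even round
assert my_score(200, 600) == 750.0   # above 500: superior performance
```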
Experimental Method
MCTSAI (an AI applying MCTS) vs. the five top-ranked AIs of the 2015 tournament
- All five opponent AIs are rule-based
- 100 games (50 games on each side)

TABLE I. THE PARAMETERS USED IN THE EXPERIMENTS
Notation   Meaning                               Value
C          Balance parameter                     3
N_max      Threshold of the number of visits     10
D_max      Threshold of the depth of the tree    2
T_sim      Number of frames per simulation       60 frames
Result (1/5)
[Bar chart] Fig. 1. The average scores of MCTSAI against the five top-ranked AIs of the 2015 tournament. Horizontal axis: opponent AI (Machete, Ni1mir4ri, Jay_Bot, RatioBot, AI128200); vertical axis: score (0 to 800).
Result (2/5)
[Fig. 1 repeated: the average scores against the five top-ranked AIs of the 2015 tournament]
Result (3/5)
[Video] P1: MCTSAI vs. P2: RatioBot
Result (4/5)
[Fig. 1 repeated: the average scores against the five top-ranked AIs of the 2015 tournament]
Result (5/5)
[Video] P1: MCTSAI vs. P2: Machete
Competition result in 2016
(Colors in the original slide: orange = 1st, blue = 2nd, green = 3rd.)

AI name               Total rank
BANZAI                11
DragonSurvivor        12
iaTest                 7
IchibanChan            9
JayBot2016             5
KeepYourDistanceBot   10
MctsAi                 3
MrAsh                  4
Poring                 8
Ranezi                 2
Snorkel               13
Thunder01              1
Tomatensimulator       6
Triump                14
Conclusion
- We applied MCTS to a fighting game AI.
- The results showed that MCTS is effective in a fighting game AI.
Future work
- In a fighting game, random simulation of the opponent's behavior is not effective.
- Predict the opponent's behavior and use this information in the simulation.
Thank you for listening
Application of Monte Carlo Tree Search in a Fighting Game AI (GCCE 2016)
Editor's Notes
  • #2: Hello everyone. My name is Shubu Yoshida, from the Intelligent Computer Entertainment Lab, Ritsumeikan University. I'd like to talk about "Application of Monte-Carlo Tree Search in a Fighting Game AI".
  • #3: This is the outline of my presentation.
  • #4: A Fighting Game AI Competition is held every year. In this competition, the high-ranking AIs have mainly been well-tuned rule-based AIs, which always take the same action in the same situation. Because rule-based AIs take predetermined actions, human players can easily predict their action patterns and outsmart them. In addition, if the parameters of the actions change, a rule-based AI's strength changes with them.
  • #5: To solve this problem, we apply MCTS to a fighting game AI. MCTS decides the AI's next action by stochastic simulations. MCTS-based approaches have produced significantly promising results not only in board games such as Go [2] but also in real-time games such as Ms. Pac-Man [3]. Since a fighting game is similar to Ms. Pac-Man in being real-time, MCTS is expected to perform well in a fighting game too. In this paper, we evaluate the effectiveness of MCTS in a fighting game.
  • #6: We modified traditional MCTS for a fighting game. This figure gives an overview of traditional MCTS; I will explain it first, and then explain the MCTS we use for the fighting game. MCTS combines game-tree search with the Monte Carlo method: each node represents a state of the game, and each edge represents an action.
  • #7: First, MCTS selects the child node with the highest UCB1 value until it reaches a leaf node. Each child node has a UCB1 value.
  • #8: The UCB1 value is calculated by this formula. The first term is the evaluation value. The second term makes MCTS preferentially select child nodes that have been visited less. Together, they make MCTS select a child node that not only has a high evaluation value but has also been visited less, preventing local search. In short, the first term is exploitation and the second term is exploration.
  • #9: Second, after arriving at a leaf node, if its number of visits exceeds a pre-defined threshold and the depth of the tree has not reached the upper limit, MCTS creates child nodes from it.
  • #10: Third, MCTS performs a random simulation along the path from the root node to the leaf node, continuing until the end of the game. In this phase, the opponent's actions are selected randomly, while the AI's own actions are those on the selected path. After these actions are carried out, we obtain a reward and a new state.
  • #11: Finally, MCTS propagates the result of the simulation from the leaf node to its parent, recalculating UCB1 values, and repeats this propagation up to the root node. These four steps are repeated for the allowed time budget. Then the child of the root node with the highest number of visits is chosen.
  • #12: In fighting games, UCB1 is defined by this formula. The evaluation value of node i is the average, over simulations, of the player character's hit-point change minus the opponent character's hit-point change. This value is high when the AI deals a lot of damage to the opponent while taking little damage itself. Each HP parameter is measured before and after the j-th simulation: the first term is the player's HP difference across a simulation, and the second is the opponent's.
  • #13: In the expansion step, traditional MCTS expands only one node at a time; in this paper, we expand all actions (nodes) the AI can take. Fighting games have many actions, and a real-time game imposes a search time limit, so we want to explore every node at least once. In the simulation step, board-game MCTS simulates to the end of the game, but a real-time game has limited thinking time, so we restrict the tree depth. These are the main changes to MCTS for fighting games.
  • #14: In the experiment, we used FightingICE as the fighting game platform. FightingICE is a 2D fighting game developed by our laboratory for game AI research, and it is used as the platform of the international fighting game AI competition recognized by IEEE CIG. The player AI's score (myScore) is calculated by this formula; if it is above 500, the AI's performance is superior to the opponent AI's.
  • #15: Next, the experimental method. We let MCTSAI fight 100 games against the five top-ranked AIs of the 2015 tournament, switching sides halfway. Each opponent AI is rule-based. We used these parameters.
  • #16: The average score against each AI is shown in Fig. 1. The horizontal axis lists the opponent AIs, from the 1st ranked to the 5th ranked, left to right. The vertical axis shows the average score of MCTSAI against each of them.
  • #17: From this result, the proposed AI outperformed all opponent AIs except the 1st ranked AI, Machete.
  • #18: This video shows a match where P1 is MCTSAI and P2 is RatioBot, the 4th ranked AI in the 2015 tournament. As the video shows, MCTSAI is able to dodge RatioBot's attacks; the simulation in Monte-Carlo tree search is working well, so MCTS is an effective method in this fighting game.
  • #19: However, the proposed AI did not perform well against Machete.
  • #20: This video shows a match where P1 is MCTSAI and P2 is Machete. Machete is a well-tuned rule-based AI that repeatedly performs short actions requiring few frames, which are not simulated well by the random simulation in MCTS.
  • #21: This is the competition result in 2016. The horizontal axis lists the AI names, and the numbers are their rankings. In this competition our MctsAi came 3rd, so MCTS also showed good results in an actual tournament.
  • #22: In conclusion, we applied MCTS to a fighting game AI, and the results showed that MCTS is effective there. We also found that random simulation of the opponent's behavior is not effective in fighting games, so in the future we plan to add a mechanism that predicts the opponent's behavior and uses that prediction in the simulation; such a mechanism should better simulate the opponent.