Policy or Value? Loss Function and Playing Strength in AlphaZero-like Self-play

By an unnamed author
Last updated 24 December 2024
Recently, AlphaZero has achieved outstanding performance in playing Go, Chess, and Shogi. Players in AlphaZero combine Monte Carlo Tree Search with a deep neural network that is trained using self-play. The unified network has a policy head and a value head. During training, AlphaZero minimizes the sum of the policy loss and the value loss. However, it is not clear whether, and under which circumstances, other formulations of the objective function are better. In this paper, we therefore perform experiments with different combinations of these two optimization targets. Because self-play is computationally intensive, we use small games, which allows us to run multiple test cases. Using a lightweight open-source reimplementation of AlphaZero on two different games, we optimize the two targets independently and also try different combinations (sum and product). Our results indicate that, at least for relatively simple games such as 6x6 Othello and Connect Four, optimizing the sum, as AlphaZero does, performs consistently worse than other objectives; in particular, optimizing only the value loss performs better. Moreover, we find that care must be taken when measuring playing strength: tournament Elo ratings differ from training Elo ratings, and training Elo ratings, though cheap to compute and frequently reported, can be misleading and may introduce bias. It is currently not clear how these results transfer to more complex games, or whether there is a phase transition between our setting and the AlphaZero application to Go, where the sum is seemingly the better choice.
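The objectives compared in the abstract can be sketched in a few lines. The snippet below is a minimal, illustrative PyTorch-style sketch, not the paper's or AlphaZero's actual code; all function and variable names are assumptions made for illustration. The policy head is trained against the MCTS visit-count distribution with cross-entropy, the value head against the self-play game outcome with mean squared error, and the two terms are then combined as a sum (AlphaZero's default), a product, or used individually.

```python
import torch.nn.functional as F

def training_loss(policy_logits, value_pred, target_pi, target_z, mode="sum"):
    """Combine policy and value losses for an AlphaZero-style update.

    policy_logits: raw network outputs over moves, shape (batch, moves)
    value_pred:    predicted game outcome in [-1, 1], shape (batch,)
    target_pi:     MCTS visit-count distribution, shape (batch, moves)
    target_z:      actual game outcome from self-play, shape (batch,)
    mode:          'sum' (AlphaZero default), 'product', 'policy', or 'value'
    """
    # Policy loss: cross-entropy between the MCTS policy and the network policy.
    policy_loss = -(target_pi * F.log_softmax(policy_logits, dim=1)).sum(dim=1).mean()
    # Value loss: mean squared error between predicted and actual outcome.
    value_loss = F.mse_loss(value_pred, target_z)

    if mode == "sum":        # AlphaZero's original objective
        return policy_loss + value_loss
    if mode == "product":    # alternative combination studied in the paper
        return policy_loss * value_loss
    if mode == "policy":     # optimize the policy head only
        return policy_loss
    if mode == "value":      # optimize the value head only
        return value_loss
    raise ValueError(f"unknown mode: {mode}")
```

Note that AlphaZero's full objective also includes an L2 weight-regularization term; in a PyTorch-style implementation that is typically handled by the optimizer's weight decay rather than added to this function.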
Image gallery (related figures and articles):
- AlphaGo: How it works technically?, by Jonathan Hui
- AlphaZero: A General Reinforcement Learning Algorithm that Masters
- Reimagining Chess with AlphaZero, February 2022
- AlphaZero Explained · On AI
- 🔵 AlphaZero Plays Connect 4
- Policy or Value? Loss Function and Playing Strength in AlphaZero
- Warm-Start AlphaZero Self-play Search Enhancements
- Decaying curves with different l; every curve decays from 0.5
- reference request - How do neural networks play chess
- Adaptive Warm-Start MCTS in AlphaZero-Like Deep Reinforcement
- AlphaZero
- A general reinforcement learning algorithm that masters chess
