Posted on 2023-05-26, 07:47. Authored by Kamenetsky, D.
Over the past two decades, Reinforcement Learning has emerged as a promising Machine Learning technique capable of solving complex dynamic problems. Its benefit lies in the fact that the agent learns from its own experience rather than being told directly what to do. For problems with large state-spaces, Reinforcement Learning algorithms are combined with function approximation techniques, such as neural networks. The architecture of the neural network plays a significant role in the agent's learning. Past research has demonstrated that networks with a constructive architecture outperform those with a fixed architecture on some benchmark problems. This study compares the performance of these two architectures in Othello, a complex deterministic board game. Three networks are used in the comparison: two with a constructive architecture (Cascade and the Resource Allocating Network) and one with a fixed architecture (the Multilayer Perceptron). The study also investigates the effect of input representation, the number of hidden nodes, and other parameters used by the networks. Training is performed with both on-policy (Sarsa) and off-policy (Q-Learning) algorithms. Results show that the agents were able to learn the positional strategy (a novice strategy in Othello) and could beat each of the three built-in opponents. Agents trained with the Multilayer Perceptron perform better, but converge more slowly, than those trained with Cascade.
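To make the on-policy/off-policy distinction concrete, the two update rules named above can be sketched as follows. This is a minimal tabular illustration only, with assumed hyperparameter values; the thesis itself uses neural-network function approximation rather than a lookup table, and all names here are hypothetical.

```python
import random

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # assumed learning rate, discount, exploration

def epsilon_greedy(Q, state, actions):
    """Behaviour policy: random action with probability EPSILON, else greedy."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((state, a), 0.0))

def sarsa_update(Q, s, a, r, s2, a2):
    """On-policy Sarsa: bootstrap from the action a2 actually taken in s2."""
    target = r + GAMMA * Q.get((s2, a2), 0.0)
    Q[(s, a)] = Q.get((s, a), 0.0) + ALPHA * (target - Q.get((s, a), 0.0))

def q_learning_update(Q, s, a, r, s2, actions):
    """Off-policy Q-Learning: bootstrap from the greedy action in s2,
    regardless of which action the behaviour policy will actually take."""
    best = max(Q.get((s2, a2), 0.0) for a2 in actions)
    Q[(s, a)] = Q.get((s, a), 0.0) + ALPHA * (r + GAMMA * best - Q.get((s, a), 0.0))
```

The only difference between the two rules is the bootstrap target: Sarsa uses the value of the next action the agent actually selects, while Q-Learning uses the maximum value over next actions.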