A number of reinforcement learning algorithms have been developed that are guaranteed to converge to an optimal solution when used with look-up tables. However, it has also been shown that these algorithms can become unstable when applied directly to a function approximation system. Baird (1995) introduced a new class of algorithms, the residual algorithms, to address the instability that direct algorithms exhibit with function approximation systems. This thesis extended Baird's work by comparing the performance of the residual algorithm against direct application of the Temporal Difference (TD) learning algorithm. Four benchmark experiments were used to test each algorithm with various values of lambda and alpha over twenty trials each. Overall, the residual algorithm outperformed direct application of the TD learning algorithm on all four experiments.
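For context, a minimal sketch of the two update rules being compared, in notation introduced here rather than taken from the source: given a parameterized value estimate $V_w$, learning rate $\alpha$, discount factor $\gamma$, and TD error $\delta = r + \gamma V_w(s') - V_w(s)$, the direct TD update adjusts the weights along the gradient at the current state only, while one common statement of Baird's (1995) residual update also includes the gradient at the successor state, mixed in by a weight $\phi \in [0,1]$:

\[
\Delta w_{\text{direct}} = \alpha\,\delta\,\nabla_w V_w(s),
\qquad
\Delta w_{\text{residual}} = \alpha\,\delta\,\bigl(\nabla_w V_w(s) - \phi\,\gamma\,\nabla_w V_w(s')\bigr).
\]

Setting $\phi = 0$ recovers the direct update, and $\phi = 1$ gives the pure residual-gradient update; intermediate values trade the convergence guarantees of the residual-gradient form against the typically faster learning of the direct form.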