University of Tasmania

Localisation and navigation : applying biological principles in mobile robotics

Thesis, posted on 2023-05-27, 17:51, authored by Ollington, R
Recently, there has been a significant effort to apply behavioural and anatomical studies of hippocampal place learning in rodents and other animals to the problem of robot localisation and mapping. The stated purpose of these recent experiments is twofold. Firstly, it is hoped that a study of this material will lead to improved algorithms for mobile robotics. Secondly, the behaviour of these new algorithms may be studied to evaluate psychological theories, and aid in the development of new theories. This thesis builds on these experiments by developing a complete localisation and navigational system for a simulated mobile robot. In order to provide a complete and efficient system, several new algorithms were developed.

Firstly, a method for preprocessing input was required, and so the adaptive response function neuron (ARFN) was developed. This neuronal model is able to identify similar input patterns while discriminating between conceptually different sensory experiences. ARFNs learn a locally tuned response to input patterns, and are able to adapt the centre, width and shape of each input's response function on-line. These cells demonstrate one simple way that neurons in the cerebral cortex may learn a locally tuned response to input.

Secondly, a place cell system was developed for localisation. The new system provides a simple technique for establishing place cell firing based on odometric information and the current view (as captured by ARFNs). This system enables the robot's position to be accurately estimated, even in the presence of random and systematic odometric errors. The main advantage of the new system is that it allows certain topological assumptions to be made a priori, thus accelerating the training of downstream navigational systems. This prior knowledge may help explain the dead-reckoning abilities of some animals and provides new insights into the place cell system in general.

Finally, a novel reinforcement learning algorithm was developed for goal-independent navigation in complex environments. The new algorithm, called Concurrent Q-Learning (CQL), learns a value function for all goals simultaneously, and updates this value function more efficiently than similar algorithms. This is particularly true in dynamic environments, where CQL is shown to outperform other reinforcement learning algorithms. Unlike CQL, alternative methods for achieving goal-independent navigation, such as coordinate learning, cannot easily be applied to complex environments. Furthermore, the performance of CQL shows that coordinate learning is not necessary to solve behavioural tasks previously thought to require an abstract vector representation.

While the focus of this research has been on spatial cognition, the hippocampus is also thought to be fundamental to other basic thought processes. It is therefore hoped that this research may stimulate further study not only into animal and robotic navigation, but also into biological and artificial intelligence in general.
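
To make the idea behind a locally tuned, online-adapting unit concrete, the sketch below shows a generic Gaussian unit whose centre and width move toward the inputs that activate it. This is an illustrative reconstruction under assumed parameter names, learning rates and update rules; it is not the ARFN definition given in the thesis, which also adapts the shape of each response function.

```python
import numpy as np

class LocallyTunedUnit:
    """Gaussian (radial-basis-like) unit whose centre and width adapt online.

    Illustrative only: the parameter names and update rules here are
    assumptions, not the ARFN equations defined in the thesis.
    """

    def __init__(self, dim, width=1.0, lr_centre=0.05, lr_width=0.01):
        self.centre = np.zeros(dim)   # preferred input pattern
        self.width = width            # breadth of the tuned response
        self.lr_centre = lr_centre
        self.lr_width = lr_width

    def response(self, x):
        # Locally tuned activation: high near the centre, falling off with distance.
        d2 = np.sum((np.asarray(x, dtype=float) - self.centre) ** 2)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def adapt(self, x):
        # Online adaptation: move the centre toward inputs that strongly
        # activate the unit, and nudge the width toward the observed spread.
        x = np.asarray(x, dtype=float)
        r = self.response(x)
        self.centre += self.lr_centre * r * (x - self.centre)
        dist = np.sqrt(np.sum((x - self.centre) ** 2))
        self.width += self.lr_width * r * (dist - self.width)
        return r
```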
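Similarly, the following sketch illustrates the flavour of an all-goals value update of the kind described for Concurrent Q-Learning: a single observed transition updates a tabular value function indexed by every candidate goal. The table layout, reward scheme and learning parameters are assumptions for illustration, not the CQL update rules given in the thesis.

```python
import numpy as np

def all_goals_q_update(Q, s, a, s_next, alpha=0.1, gamma=0.9):
    """Update a goal-indexed value table Q[goal, state, action] from one transition.

    Illustrative only: a tabular, all-goals Q-learning style update in the
    spirit of goal-independent navigation.  The reward scheme (1 on reaching
    the goal, 0 otherwise) is an assumption, not the thesis's CQL rule.
    """
    n_goals = Q.shape[0]
    for g in range(n_goals):
        if s_next == g:
            target = 1.0                          # goal reached: terminal reward
        else:
            target = gamma * Q[g, s_next].max()   # bootstrap from the next state
        Q[g, s, a] += alpha * (target - Q[g, s, a])
    return Q

# Example: 5 states (each also a candidate goal), 4 actions.
Q = np.zeros((5, 5, 4))
Q = all_goals_q_update(Q, s=0, a=2, s_next=1)
```

Because every transition contributes to the value estimate for every goal, a single exploration phase can later serve navigation to an arbitrary goal, which is the property the abstract refers to as goal-independent navigation.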

History

Publication status

  • Unpublished

Rights statement

Copyright 2007 the author. The University is continuing to endeavour to trace the copyright owner(s), and in the meantime this item has been reproduced here in good faith; we would be pleased to hear from the copyright owner(s). Thesis (PhD)--University of Tasmania, 2007. Includes bibliographical references.

Repository Status

  • Open
