Natural gestural interaction for geovisualisation
Thesis, posted on 2023-05-27, 09:55, authored by Stannus, SF
As the power of computers has increased, the bottleneck for many tasks has shifted from the capability of computers to process information to the ability of humans to absorb this information and relay their input back to the computer accordingly - a cycle largely dependent on the interface through which users interact with the machine. The mouse and keyboard, which were designed for the computing tasks of decades ago, are still the dominant interface today, despite recent advances in natural interaction technologies and an ever-widening range of applications. Geovisualisations are used for many applications, ranging from casual use by the general public to sophisticated research into complex phenomena. Being multidimensional and potentially complex, GIS datasets are ideal candidates for applying new natural approaches to interaction. Gesture has long been associated with natural interaction, but existing research has failed to surpass the traditional desktop interaction paradigm in usage. The aim of this work is to establish whether or not a gestural approach built on well-considered theory and refined by experimentation can provide a practical improvement to geovisual interaction. It also sets out to document any discoveries regarding requirements for both interaction design and implementation made in this endeavour. The thesis begins with an introduction to geovisualisation, outlining the types of data used and the growing range of areas to which it is applied, especially those of a 3D nature. Efficiency, learnability and comfort are identified as the key targets for improvement, and it is argued that the ideal approach would be a bimanual interface following the principles of direct and simultaneously integral interaction. An initial unencumbered prototype implemented using computer vision is presented, along with the structure and results of a comparative user study in which this prototype was tested against existing devices for navigation in Google Earth.
This pilot study uncovered outstanding technical roadblocks and yielded invaluable qualitative feedback. Based on these insights, the case is made for AeroSpace, a technique built around a novel metaphor for spatial manipulation and navigation with the full seven degrees of freedom. AeroSpace extends the two-point method of transformation commonly used with touchscreen devices to control navigation in immersive 3D spatial environments, using the 3D position and direction of the user's index fingers together with simple pinch and point hand poses. This approach was implemented using custom gloves in a test-bed built around NASA's World Wind virtual globe software. A description is given of a comparative user study involving tasks related to a high-resolution landscape model. Results from this study show that users complete tasks combining navigation with the marking of areas and data points significantly faster with AeroSpace than with a popular commercial device specifically designed for 3D spatial interaction. Users also rated AeroSpace as significantly more natural and comfortable than the commercial device. Altogether, this work presents evidence for a number of conclusions. It shows that the benefits of encumbered gestural interaction will continue to outweigh the disadvantages for the foreseeable future. It also presents qualitative and quantitative evidence that future gestural interaction schemes should follow the principles of direct, bimanual and simultaneously integral interaction. However, it also demonstrates that users do not always expect natural metaphors, and that physical navigation in particular is underutilised by new users. This work also opens up new areas of potential future research. One priority is to identify ways of increasing the utilisation of virtual navigation and simultaneous 7DoF navigation. The role played by the level of directness in the success of AeroSpace also remains unclear.
Immersive head-mounted displays would allow both this question and approaches to collaboration in 7DoF space to be suitably tested.
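The two-point transformation underlying techniques like AeroSpace can be illustrated with a short sketch: given the 3D positions of two tracked pinch points before and after a hand movement, a uniform scale, a shortest-arc rotation and a translation can be recovered. The function name and NumPy formulation below are illustrative assumptions, not the thesis's actual implementation; note also that two point positions alone leave rotation about their connecting axis ambiguous, which is presumably why AeroSpace adds the fingers' direction to reach the full seven degrees of freedom.

```python
import numpy as np

def two_point_transform(p1_old, p2_old, p1_new, p2_new):
    """Recover scale, rotation (axis/angle) and translation from two
    tracked points, as in two-point (pinch) manipulation. Hypothetical
    sketch; assumes the two points are distinct in both frames."""
    p1_old, p2_old = np.asarray(p1_old, float), np.asarray(p2_old, float)
    p1_new, p2_new = np.asarray(p1_new, float), np.asarray(p2_new, float)

    v_old = p2_old - p1_old
    v_new = p2_new - p1_new

    # Uniform scale: ratio of the inter-point distances
    s = np.linalg.norm(v_new) / np.linalg.norm(v_old)

    # Shortest-arc rotation taking the old inter-point axis to the new one
    a = v_old / np.linalg.norm(v_old)
    b = v_new / np.linalg.norm(v_new)
    axis = np.cross(a, b)                 # zero if the axes are parallel
    angle = np.arctan2(np.linalg.norm(axis), np.dot(a, b))

    # Translation: displacement of the midpoint between the two points
    t = (p1_new + p2_new) / 2.0 - (p1_old + p2_old) / 2.0
    return s, axis, angle, t
```

The remaining degree of freedom (roll about the axis joining the two points) cannot be recovered from positions alone; a practical implementation would resolve it from an extra cue such as finger pointing direction.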