Posted on 2023-05-23, 08:38. Authored by Kahl, G; Wasinger, R; Schwartz, T; Spassova, L
In everyday life, it is useful for mobile devices like cell phones and PDAs to have an understanding of their user's surrounding context. Presentation output planning is one area where such context can be used to optimally adapt information to a user's current situational context. This paper outlines the architecture of a context-aware output planning module, as well as the design and implementation of three output generation strategies: user-defined, symmetric multimodal, and context-based output planning. These strategies are responsible for selecting the best-suited modalities (e.g. speech, gesture, text) for presenting information to a user situated in a public environment such as a shopping mall. A central point of this paper is the identification of context factors relevant to presentation planning on mobile devices with finite resources, where output may be private and/or public. We show via a working demonstrator the extent to which such factors can, with readily available technology, be incorporated into a system. The paper also outlines the set of reactions that a system might take when given context information on the user and the environment.
History
Publication title: Proc. of the AISB Symposium on Multimodal Output Generation (MOG)
Editors: A. Bangerter et al.
Pagination: 46-49
Department/School: School of Information and Communication Technology
Publisher: SIGMedia
Place of publication: UK
Event title: The AISB Symposium on Multimodal Output Generation (MOG)
Event venue: Aberdeen, Scotland, UK
Date of event (start date): 2008-04-03
Date of event (end date): 2008-04-04
Rights statement: Copyright 2008 the Authors & The Society for the Study of Artificial Intelligence and the Simulation of Behaviour (AISB)