Modeling ontology for multimodal interaction in ubiquitous computing systems

A. Wehbi, A. R.-Cherif, C. Tadj

People communicate with each other in various ways, such as speech and gestures, to convey information about their status, emotions, and intentions. But how can this information be described so that autonomous systems (e.g., robots) can interact with a human being in a given environment?

A multimodal interface allows more flexible and natural interaction between a user and a computing system. In such an interactive system, fusion engines are the fundamental components that interpret input events whose meaning can vary according to the context. This paper presents a methodological approach for designing an architecture that facilitates the work of a fusion engine. The selection of modalities and the fusion of events invoked by the fusion engine are based on an ontology that describes the environment in which a multimodal interaction system operates. The techniques used to achieve these features are discussed in this paper.
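As a rough illustration of the ontology-driven modality selection described above, the following minimal Python sketch models modalities, an environment context, and a simple context-based selection step. This is not the paper's implementation; the names `Modality`, `EnvironmentContext`, and `select_modalities`, and the example environment properties, are all hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a tiny "ontology" of modalities and environment
# context, loosely following the idea of selecting modalities based on
# a description of the environment where the interaction takes place.

@dataclass(frozen=True)
class Modality:
    name: str
    # Environment conditions under which this modality is usable,
    # e.g. {"noise": "low"} means speech input needs a quiet environment.
    requires: dict = field(default_factory=dict)

@dataclass
class EnvironmentContext:
    """Current state of the environment hosting the interaction."""
    properties: dict  # e.g. {"noise": "high", "lighting": "normal"}

def select_modalities(available, context):
    """Return the modalities whose requirements match the current context."""
    return [
        m for m in available
        if all(context.properties.get(k) == v for k, v in m.requires.items())
    ]

if __name__ == "__main__":
    modalities = [
        Modality("speech", requires={"noise": "low"}),
        Modality("gesture", requires={"lighting": "normal"}),
        Modality("touch"),  # no environmental requirement
    ]
    ctx = EnvironmentContext({"noise": "high", "lighting": "normal"})
    print([m.name for m in select_modalities(modalities, ctx)])
    # -> ['gesture', 'touch']: speech is excluded in a noisy environment
```

In a full system, the filtered modality list would then constrain which input events the fusion engine attempts to combine.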