Towards a Context-Aware and Pervasive Multimodality

M. D. Hina, C. Tadj, A. Ramdane-Chérif, N. Levy

Pervasive multimodality aims to realize anytime, anywhere computing using various modes of human-machine interaction, supported by various media devices beyond the traditional keyboard, mouse and screen used for data input and output. For utmost efficiency, the modalities of human-machine interaction must be selected based on their suitability to a given interaction context (i.e. the combined user, environment and system contexts) and on the availability of supporting media devices. To be fault-tolerant, the system should also be capable of finding a replacement for a failed media device. This paper presents a paradigm for such a computing system. It proposes solutions to some of the associated technical challenges, including the establishment of relationships among interaction context, modalities and media devices, and the design of a mechanism through which the system incrementally acquires knowledge of various interaction contexts and their corresponding selections of suitable modalities.
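To make the selection principle concrete, the following is a minimal sketch, not taken from the paper, of context-driven modality selection with device-failure fallback. All names (Modality mapping, InteractionContext, suitable, select_modalities) and the suitability rules are hypothetical simplifications assumed for illustration only.

```python
from dataclasses import dataclass, field

# Hypothetical mapping from modalities to the media devices able to support them.
MODALITY_DEVICES = {
    "speech_input": {"microphone"},
    "speech_output": {"speaker", "headset"},
    "visual_output": {"screen"},
    "manual_input": {"keyboard", "mouse", "touchscreen"},
}

@dataclass
class InteractionContext:
    """Combined user, environment and system contexts (greatly simplified)."""
    noisy_environment: bool = False
    hands_busy: bool = False
    available_devices: set = field(default_factory=set)

def suitable(modality: str, ctx: InteractionContext) -> bool:
    """Judge a modality's suitability to the given interaction context."""
    if modality.startswith("speech") and ctx.noisy_environment:
        return False   # speech modalities are unreliable in a noisy environment
    if modality == "manual_input" and ctx.hands_busy:
        return False   # manual input is impossible when the user's hands are busy
    return True

def select_modalities(ctx: InteractionContext) -> dict:
    """For each suitable modality, pick one currently available supporting device."""
    selection = {}
    for modality, devices in MODALITY_DEVICES.items():
        if not suitable(modality, ctx):
            continue
        usable = devices & ctx.available_devices
        if usable:
            # Fault tolerance: any remaining supporting device can serve
            # as a replacement for one that has failed.
            selection[modality] = sorted(usable)[0]
    return selection

if __name__ == "__main__":
    ctx = InteractionContext(
        hands_busy=True,
        available_devices={"microphone", "speaker", "screen"},
    )
    print(select_modalities(ctx))
    # A device failure is modelled by removing it from the available set;
    # re-running the selection finds a replacement if one exists.
    ctx.available_devices.discard("speaker")
    print(select_modalities(ctx))
```

In this toy version, suitability is a hard-coded rule set; the incremental knowledge acquisition described in the paper would instead learn such context-to-modality associations over time.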