Using Multimodal Fusion in Accessing Web Services

A. Zaguia, M. Hina, C. Tadj, A. Ramdane-Chérif

Today, technology allows us to build rich, fully human-controlled multimodal systems. These systems are equipped with multimodal interfaces that enable a more natural and more efficient interaction between human and machine. End users can take advantage of natural modalities (e.g. eye gaze, speech, gesture) to communicate or exchange information with applications. Multimodal applications in web services, integrated with natural modalities, are an effective solution for users who cannot use a keyboard or a mouse, users with visual impairments, mobile users equipped with wireless telephones or other mobile devices, users with reduced physical ability, and others. Our work presents an approach in which various input modalities (speech, touch screen, keyboard, eye gaze, etc.) are placed at the user’s disposal for accessing web services. While the current state of the art uses two (in rare cases, three) pre-defined modalities, our approach allows an unlimited number of concurrent modalities through semantic-level “multimodal fusion”. This approach gives users the flexibility to use whichever modalities they see fit for their situation. This paper presents a detailed description of the proposed approach as well as an application that has been developed using these modalities.
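To make the idea of semantic-level fusion concrete, the sketch below is a minimal, hypothetical illustration (not the authors' actual implementation): each modality emits a partial "semantic frame" carrying whatever slots it can fill (e.g. speech supplies the action, a touch supplies the target), and a fusion step merges frames that arrive within a short time window into one complete command. All names (`SemanticFrame`, `fuse`, the slot names) are assumptions made for this example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SemanticFrame:
    """A partial interpretation produced by one input modality."""
    modality: str                   # e.g. "speech", "touch", "gaze"
    timestamp: float                # arrival time in seconds
    action: Optional[str] = None    # e.g. "open" (often from speech)
    target: Optional[str] = None    # e.g. a service selected by touch/gaze

def fuse(frames, window=2.0):
    """Merge partial frames from any number of modalities into commands.
    Frames arriving within `window` seconds of each other are treated
    as parts of the same multimodal act; the first modality to fill a
    slot wins."""
    frames = sorted(frames, key=lambda f: f.timestamp)
    commands, current = [], None
    for f in frames:
        if current is not None and f.timestamp - current["t"] <= window:
            # Fill in whichever slots this modality contributes.
            current["action"] = current["action"] or f.action
            current["target"] = current["target"] or f.target
            current["t"] = f.timestamp
        else:
            if current is not None:
                commands.append(current)
            current = {"action": f.action, "target": f.target, "t": f.timestamp}
    if current is not None:
        commands.append(current)
    return [(c["action"], c["target"]) for c in commands]
```

For example, a spoken "open" at t=0.1 s combined with a touch selecting an email service at t=0.5 s would fuse into a single `("open", "email_service")` command, while a frame arriving several seconds later starts a new command. Any number of modalities can contribute frames, which is the property the approach above exploits.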