Multimodal Fusion for Interaction Systems

A. Wehbi, A. Zaguia, A. Ramdane-Chérif, C. Tadj

Researchers in computer science and computer engineering now devote a significant part of their efforts to communication and interaction between humans and machines. Indeed, with the advent of real-time multimodal and multimedia processing, the computer is no longer seen only as a calculation tool, but as a communication-processing machine, a machine that accompanies, assists, or supports many activities of daily life. A multimodal interface allows more flexible and natural interaction between a user and a computing system. It extends the capabilities of the system to better match the natural communication means of human beings. In such interactive systems, fusion engines are the fundamental components that interpret input events whose meaning can vary according to a given context. The fusion of events from various communication sources, such as speech, pen, text, and gesture, enables the richness of human-machine interaction.
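To make the role of a fusion engine concrete, the following is a minimal sketch, not the authors' actual design: events from different modalities (speech, gesture, pen) that arrive close together in time are merged into a single multimodal command. The `Event` class, the time-window rule, and all field names are illustrative assumptions.

```python
# Hypothetical toy fusion engine: events whose timestamps fall within a
# short window are grouped and merged into one multimodal command.
from dataclasses import dataclass

@dataclass
class Event:
    modality: str     # e.g. "speech", "gesture", "pen"
    content: str      # recognized token, e.g. "delete" or "points_at:file3"
    timestamp: float  # seconds since interaction start

def fuse(events, window=1.5):
    """Group events within `window` seconds of the group's first event,
    merging their contents into one {modality: content} command."""
    commands, current = [], []
    for ev in sorted(events, key=lambda e: e.timestamp):
        if current and ev.timestamp - current[0].timestamp > window:
            commands.append({e.modality: e.content for e in current})
            current = []
        current.append(ev)
    if current:
        commands.append({e.modality: e.content for e in current})
    return commands

events = [
    Event("speech", "delete", 0.2),
    Event("gesture", "points_at:file3", 0.6),  # fused with "delete"
    Event("speech", "open", 5.0),              # far apart: separate command
]
print(fuse(events))
# → [{'speech': 'delete', 'gesture': 'points_at:file3'}, {'speech': 'open'}]
```

A real engine would additionally resolve ambiguity using context (e.g. which object a deictic gesture refers to), which a fixed time window alone cannot capture.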

This research aims to provide a better understanding of multimodal fusion and interaction through the construction of a fusion engine using semantic web technologies. The goal is to develop an expert fusion system for multimodal human-machine interaction, leading to the design of a monitoring tool that supports able-bodied persons, seniors, and persons with disabilities, at home or outside.

Keywords: fusion, multimodal interaction, ontology, modalities, OWL API, fusion models.