Multimodal Fission for Interaction Architecture

A. Zaguia, A. Wehbi, C. Tadj, A. Ramdane-Chérif

Since the eighties, the rapid development of information technology has made it possible to create systems that interact with the user in a harmonious manner. This is due to the emergence of a technology known as multimodal interaction, which allows the user to employ natural modalities (speech, gesture, eye gaze, etc.) to interact with the machine in a richer computing environment. Such systems are called multimodal systems. They represent a remarkable shift from conventional windows-icons interfaces toward a human-machine interaction that offers the user more naturalness, flexibility and portability. Generally, these systems integrate multimodal interfaces for both input and output. Through the output interface, the system should be able to choose, among the available modalities, those that best meet the environmental constraints. It should also be able to interpret a complex command, divide it into elementary sub-tasks, and present them through the output modalities: this process is called multimodal fission. Our work specifies and develops fission components for multimodal interaction and presents an effective fission algorithm using patterns, applicable when various output modalities (audio, display, Braille, etc.) are available to the user.
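The fission process described above can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the modality names come from the abstract, but the pattern table, sub-task types, and the `noisy` constraint are hypothetical assumptions introduced for the example.

```python
# Hypothetical sketch of multimodal fission: a composite command is split
# into elementary sub-tasks, and each sub-task is assigned the available
# output modality that best satisfies the current environmental constraints.

AVAILABLE_MODALITIES = {"audio", "display", "braille"}

# Assumed preference patterns: sub-task type -> modalities ranked best-first.
PATTERNS = {
    "alert": ["audio", "display", "braille"],
    "text":  ["display", "braille", "audio"],
}

def fission(command, available=AVAILABLE_MODALITIES, noisy=False):
    """Split a composite command into (content, modality) pairs."""
    plan = []
    for subtask in command:  # each elementary sub-task of the command
        ranked = PATTERNS.get(subtask["type"], ["display"])
        for modality in ranked:
            # Example environmental constraint: avoid audio in a noisy setting.
            if noisy and modality == "audio":
                continue
            if modality in available:
                plan.append((subtask["content"], modality))
                break
    return plan

# Usage: a command decomposed into two elementary sub-tasks.
command = [
    {"type": "alert", "content": "Low battery"},
    {"type": "text",  "content": "Meeting notes"},
]
print(fission(command))              # alert goes to audio, text to display
print(fission(command, noisy=True))  # alert falls back to the display
```

The pattern table plays the role the abstract assigns to patterns: it encodes, per sub-task type, which output modalities are preferred, while runtime constraints filter that ranking down to what the environment allows.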

Keywords: multimodal fission, pattern, human-computer interaction.