A. Zaguia, C. Tadj, A. Ramdane-Chérif
Nowadays, technology allows us to produce advanced, fully human-controlled multimodal systems. These systems are equipped with multimodal interfaces that allow more natural and more efficient interaction between man and machine. End users can take advantage of natural modalities (e.g., eye gaze, speech, gesture) to communicate and exchange information with applications. In this work, we assume that various output modalities (audio, screen, eye gaze, etc.) are available to the user. In this paper, we present the prototyping of a multimodal architecture. In particular, we show how the modality selection and fission algorithms are implemented in such a system. We use a pattern-based technique to subdivide a complex command into elementary subtasks and to select adequate modalities for each elementary subtask. We also integrate a context-based
method using a Bayesian network to resolve ambiguous or uncertain cases.

Keywords: Bayesian Network, Pattern, User Interface, Multimodal Fission.
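
As a rough illustration of the fission step described above, the following Python sketch subdivides a complex command into elementary subtasks and looks up adequate output modalities for each one via a pattern table. All names, subtask kinds, and table entries here are hypothetical placeholders, not the paper's actual pattern catalogue.

```python
from dataclasses import dataclass

# Hypothetical output-modality vocabulary (the abstract mentions
# audio, screen, and eye gaze as available output modalities).
AUDIO, SCREEN, GAZE = "audio", "screen", "eye_gaze"

@dataclass
class Subtask:
    """One elementary subtask obtained by fissioning a complex command."""
    name: str
    kind: str  # illustrative categories: "notify", "display", "locate"

# Illustrative pattern table: each subtask kind is matched to the
# output modalities considered adequate for it.
MODALITY_PATTERNS = {
    "notify":  [AUDIO],
    "display": [SCREEN],
    "locate":  [GAZE, SCREEN],
}

def fission(command: list[Subtask]) -> list[tuple[Subtask, list[str]]]:
    """Attach adequate modalities to every elementary subtask of a
    complex command, using the pattern table as the selection rule."""
    return [(task, MODALITY_PATTERNS.get(task.kind, [SCREEN]))
            for task in command]

# Example: a complex command already decomposed into three subtasks.
command = [Subtask("alert user", "notify"),
           Subtask("show route", "display"),
           Subtask("highlight exit", "locate")]
for task, modalities in fission(command):
    print(f"{task.name}: {modalities}")
```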
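
Similarly, a minimal sketch of how Bayesian reasoning could weigh context to disambiguate the modality choice: applying Bayes' rule to a prior over modalities and the likelihood of an observed context feature. The "noisy environment" feature and all probabilities below are illustrative assumptions, not the paper's actual network.

```python
# Hypothetical prior over output modalities, and likelihoods of
# observing a "noisy environment" context given each modality choice.
PRIOR = {"audio": 0.5, "screen": 0.4, "eye_gaze": 0.1}
LIKELIHOOD_NOISY = {"audio": 0.2, "screen": 0.9, "eye_gaze": 0.7}

def posterior(prior, likelihood):
    """Bayes' rule: P(modality | context) is proportional to
    P(context | modality) * P(modality), then normalized."""
    unnorm = {m: prior[m] * likelihood[m] for m in prior}
    z = sum(unnorm.values())
    return {m: p / z for m, p in unnorm.items()}

post = posterior(PRIOR, LIKELIHOOD_NOISY)
best = max(post, key=post.get)
print(post)   # posterior distribution over modalities
print(best)   # "screen" wins once the noisy context is observed
```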