Information Access in a Multimodal Multimedia Computing System for Mobile Visually-Impaired Users

A. Awdé, M. D. Hina, C. Tadj, A. R. Cherif, Y. Bellik

A multimodal interface allows computing with the input and output channels that best suit the needs of a user, including users with disabilities. In our multimodal multimedia (MM) computing system for visually-impaired users, the selection of media, modalities and types of applications to activate depends on the user's context and application data. Adapting a computing system to the needs of a mobile user is essential so that the user can continue working on his task at any time and from any place, thereby increasing his productivity. Our system is adaptive in this sense: the user can access his information anywhere and at any time, through wired and wireless networks. The user profile, the user's task and data, and the system's knowledge database (KD) "follow" the user wherever he goes, and the system adapts accordingly to the user's situation. The system detects the user's context and application data, consults the KD, and selects the appropriate media, modalities and types of applications for activation. This work is an original contribution to ongoing research aimed at helping visually-impaired users become autonomous in their use of computing systems. Our aim is to improve the computing productivity of visually-impaired users.
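
To illustrate the selection step summarized above, the following minimal sketch shows how a knowledge database of rules might map a user's context and the type of application data to an output modality. It is not the authors' implementation; the context attributes, rule format, and modality names are illustrative assumptions only.

```python
# Hypothetical sketch of context-driven modality selection.
# The context attributes, rules, and modality names are assumptions,
# not the actual KD or modalities described in the paper.

from dataclasses import dataclass

@dataclass
class UserContext:
    environment: str       # e.g. "quiet", "noisy", "public"
    braille_display: bool  # is a Braille terminal attached?
    audio_output: bool     # is audio output available/permitted?

# Knowledge database: each rule maps a (data type, context predicate)
# pair to a preferred output modality, listed in priority order.
KD_RULES = [
    ("text", lambda c: c.environment == "quiet" and c.audio_output, "speech_synthesis"),
    ("text", lambda c: c.braille_display, "braille_display"),
    ("text", lambda c: c.audio_output, "speech_with_earphones"),
    ("notification", lambda c: True, "vibration"),
]

def select_modality(data_type: str, context: UserContext) -> str:
    """Return the first modality whose rule matches the current context."""
    for rule_type, predicate, modality in KD_RULES:
        if rule_type == data_type and predicate(context):
            return modality
    return "no_suitable_modality"

if __name__ == "__main__":
    # A mobile user in a noisy place with a Braille terminal attached:
    ctx = UserContext(environment="noisy", braille_display=True, audio_output=True)
    print(select_modality("text", ctx))  # -> braille_display
```

Because the profile, task data and KD follow the user across networks, the same selection logic can be re-evaluated whenever the detected context changes, yielding the anytime-anywhere adaptation the abstract describes.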