Communication is a vital aspect of human life; it is through communication that human beings connect with one another, both as individuals and as groups, and advance human development in every field. In informatics, the very purpose of the computer is information dissemination: to send and receive information. Humans are remarkably successful at conveying ideas to one another and reacting appropriately, because we share a rich common language, a common understanding of how the world works, and an implicit understanding of everyday situations. When humans communicate with one another, they draw on the information implicit in the current situation, or context, thereby increasing the conversational bandwidth. This ability, however, does not transfer when humans interact with computers. On their own, computers do not understand our language, do not understand how the world works, and cannot sense information about the current situation. In the typical impoverished computing set-up, where information is provided to the computer through mouse, keyboard and screen, the end result is that we must supply information to the computer explicitly, an effect contrary to the promise of transparency and calm technology in Mark Weiser's vision of ubiquitous computing (Weiser 1991; Weiser 1993; Weiser and Brown 1996). To reverse this trend, it is imperative to develop methods that give computers access to context. It is through context awareness that we can increase the richness of communication in human-computer interaction and reap its most likely benefit: more useful computational services.
Context (Dey and Abowd 1999; Gwizdka 2000; Dey 2001; Coutaz, Crowley et al. 2005) is a subjective notion whose interpretation varies from one person's point of view to another. Context evolves, and a methodology for acquiring contextual information is essential. However, the one who has the final word on whether the envisioned context has been captured and acquired correctly is the end user. Current research indicates that some contextual information is predefined by systems from the very beginning; this is acceptable when the application domain is fixed, but not when we consider that a typical user performs different computing tasks on different occasions. Aiming at a more conclusive and inclusive design, we conjecture that the choice of contextual information should be left to the judgment of the end user, who has the authority to determine which information is important to him and which is not. This leads us to the incremental acquisition of context, in which context parameters are added, modified or deleted one parameter at a time.
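The idea of one-parameter-at-a-time acquisition can be illustrated with a minimal sketch of an in-memory context store; the class and method names below are hypothetical illustrations, not the interface of the system described in this chapter.

```python
class IncrementalContext:
    """Holds context parameters that the end user adds, modifies or
    deletes one at a time, at his own discretion."""

    def __init__(self):
        self._params = {}  # parameter name -> current value

    def add(self, name, value):
        # A parameter enters the context only when the user declares it.
        if name in self._params:
            raise KeyError(f"parameter '{name}' already defined")
        self._params[name] = value

    def modify(self, name, value):
        if name not in self._params:
            raise KeyError(f"unknown parameter '{name}'")
        self._params[name] = value

    def delete(self, name):
        # The user may decide a parameter is no longer relevant.
        self._params.pop(name)

    def snapshot(self):
        """Return the current instance of the acquired context."""
        return dict(self._params)


# The user decides which information matters, one parameter at a time:
ctx = IncrementalContext()
ctx.add("location", "office")
ctx.add("noise_level_db", 42)
ctx.modify("location", "meeting room")
ctx.delete("noise_level_db")
```

The point of the sketch is that no parameter is predefined: the set of parameters itself evolves under user control, matching the incremental-acquisition idea above.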
In conjunction with the idea of inclusive context, the notion of context is enlarged into the context of interaction. Interaction context refers to the combined context of the user (i.e. user context), of his working environment (i.e. environment context) and of his computing system (i.e. system context). Each of these interaction context elements, user context, environment context and system context, is composed of various parameters that describe the state of the user, of his workplace and of his computing resources as he undertakes an activity in accomplishing his computing task, and each of these parameters may evolve over time. For example, user location is a user context parameter whose value evolves as the user moves from one place to another. The same can be said of noise level, an environment context parameter, and of available bandwidth, a system context parameter, both of which evolve continuously. To realize the incremental definition of interaction context, a tool called the layered virtual machine for incremental interaction context has been developed. This tool can be used, on the one hand, to add, modify and delete a context parameter and, on the other, to determine the sensor-based context (i.e. context based on parameters whose values are obtained from raw data supplied by sensors).
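The three-part structure of interaction context can be sketched as follows; here each parameter carries a reader callable (standing in for a sensor or other data source) so that its value can evolve over time. The names are illustrative assumptions, not the chapter's actual API.

```python
class Parameter:
    """One interaction context parameter whose value may evolve."""

    def __init__(self, name, read):
        self.name = name
        self.read = read  # callable returning the current value


class InteractionContext:
    """Collective context of the user, his environment and his system."""

    def __init__(self):
        self.user = {}         # user context parameters
        self.environment = {}  # environment context parameters
        self.system = {}       # system context parameters

    def sense(self):
        """Return the current instance of the interaction context by
        reading every parameter in all three categories."""
        return {
            category: {p.name: p.read() for p in params.values()}
            for category, params in (
                ("user", self.user),
                ("environment", self.environment),
                ("system", self.system),
            )
        }


ic = InteractionContext()
ic.user["location"] = Parameter("location", lambda: "office")
ic.environment["noise_db"] = Parameter("noise_db", lambda: 55)
ic.system["bandwidth_kbps"] = Parameter("bandwidth_kbps", lambda: 1200)
```

Calling `ic.sense()` repeatedly yields successive instances of the interaction context, which is the evolving input the rest of the chapter's mechanisms consume.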
To obtain the full benefit of the richness of interaction context in human-machine communication, the modality of interaction should not be limited to the traditional mouse, keyboard and screen alone. Multimodality (Dong, Xiao et al. 2000; Oviatt 2002; Ringland and Scahill 2003) allows a much wider range of modes and forms of communication, selected and adapted to suit the given user's context of interaction, through which the end user can transmit data to the computer and the computer can respond or yield results to the user's queries. In multimodal communication, the weakness of one mode of interaction, with regard to its suitability to a given situation, is compensated by replacing it with another mode of communication that is more suitable to the situation. For example, when the environment becomes disturbingly noisy, voice may not be the ideal mode for data input; instead, the user may opt to transmit text or visual information. Multimodality also promotes inclusive informatics, as people with permanent or temporary disabilities are given the opportunity to use and benefit from advances in information technology. With mobile computing in our midst, coupled with wireless communication that allows access to information and services, pervasive and adaptive multimodality is more apt than ever to enrich communication in human-computer interaction and to provide the most suitable modes for data input and output in relation to the evolving context of interaction.
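The substitution principle described above, replacing an unsuitable mode with a more suitable one, can be shown in a toy rule-based form. The threshold, parameter names and modality labels are assumptions made for the example only; the chapter's actual selection mechanism is learning-based, as described next.

```python
NOISY_DB = 70  # assumed threshold above which voice input is unsuitable


def select_input_modality(context):
    """Pick an input modality for a (simplified) interaction context,
    compensating for modes that the situation makes unsuitable."""
    if context.get("noise_db", 0) >= NOISY_DB:
        # Voice is unreliable in a noisy environment: fall back to text.
        return "text"
    if context.get("hands_busy", False):
        # Keyboard is unsuitable when the user's hands are occupied.
        return "voice"
    return "voice" if context.get("prefers_voice", False) else "keyboard"


select_input_modality({"noise_db": 85, "prefers_voice": True})  # noisy: text wins
```

Even a user who prefers voice is steered to text input when the noise level crosses the threshold, which is exactly the weakness-compensation behaviour the paragraph describes.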
A look back at recent research informs us that a great deal of effort has been expended on defining context and on acquiring, disseminating and exploiting context within systems with a fixed application domain (e.g. healthcare (Varshney 2003; Tadj, Hina et al. 2006), education (Garlan, Siewiorek et al. 2002), etc.). A closer look also tells us that much research effort in ubiquitous computing has been devoted to various application tasks (e.g. identifying the user's whereabouts, identifying services and tools, etc.), but hardly any effort has been made to make multimodality pervasive and accessible across varied user situations. In this regard, this work fills the gap. This chapter presents a multi-agent based multimodal system adaptive to the user's interaction context: a multi-agent system design that adapts to the enlarged context called interaction context. It is intelligent and pervasive, meaning it remains functional whether the end user is stationary or on the go. It is conceived for a specific purpose: given an instance of interaction context, one that evolves over time, the system determines the optimal modalities that suit it. By optimal, we mean a trade-off in the selection decision among the appropriateness of the modality to the given interaction context, the available media devices that support the modalities, and the user's preferences. This mechanism employs machine learning, using case-based reasoning with supervised learning (Kolodner 1993; Lajmi, Ghedira et al. 2007). The input to this decision-making component is an instance of interaction context; its output is the optimal modality and the associated media devices to be activated. The mechanism continuously monitors the user's context of interaction and, on the user's behalf, continuously adapts accordingly.
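The case-based reasoning step can be condensed into a sketch in which contexts are encoded as numeric feature vectors, the nearest stored case supplies the candidate modality, and the supervised step records the user-validated solution as a new case. The distance measure and feature encoding are simplified stand-ins, not the chapter's actual mechanism.

```python
import math


class CaseBase:
    """Case-based reasoning for modality selection: retrieve the nearest
    past case, and learn from user-validated (supervised) outcomes."""

    def __init__(self):
        self.cases = []  # list of (context_vector, modality) pairs

    def retrieve(self, context):
        """Return the modality of the stored case nearest to the given
        interaction context vector, or None if the base is empty."""
        if not self.cases:
            return None
        nearest = min(self.cases, key=lambda case: math.dist(case[0], context))
        return nearest[1]

    def learn(self, context, modality):
        """Supervised step: store the validated solution as a new case."""
        self.cases.append((context, modality))


cb = CaseBase()
# Illustrative features: (noise level in dB, available bandwidth in kbps)
cb.learn((80.0, 1000.0), "visual_input")
cb.learn((30.0, 1000.0), "voice_input")
print(cb.retrieve((75.0, 900.0)))  # nearest case suggests visual_input
```

A fresh, unseen context is resolved by analogy to the most similar past case, and every confirmed decision enlarges the case base, which is how the selection improves with use.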
This adaptation is achieved through dynamic reconfiguration (Poladian 2004; Hina, Ramdane-Cherif et al. 2005) of the system's software architecture (Garlan and Perry 1994; Clements, Kazman et al. 2002; Clements, Garlan et al. 2003).
The work in this chapter differs from previous works in that, while others capture, disseminate and consume context to suit a preferred application domain, this new multi-agent system captures the context of interaction and reconfigures its architecture dynamically, in generic fashion, so that the user can continue working on his task anytime and anywhere he wishes, regardless of the application domain. In effect, the multi-agent system presented in this chapter, along with all of its mechanisms, is generic in design and can be adapted or integrated with ease, or with very little modification, into computing systems of various application domains. This is the authors' contribution to the domain.
Keywords: Human-machine interface, multi-agent system, pervasive computing, multimodal multimedia computing, software architecture.
"Centre for Pervasive Healthcare, http://www.pervasivehealthcare.dk/."
Alpaydin, E. (2004). Introduction to Machine Learning. Cambridge, Massachusetts, MIT Press.
Clements, P., Garlan, D., et al. (2003). Documenting software architectures: Views and beyond, Portland, OR, United States, Institute of Electrical and Electronics Engineers Computer Society.
Clements, P., Kazman, R., et al. (2002). Evaluating Software Architecture.
Coutaz, J., Crowley, J. L., et al. (2005). "Context is key." Communications of the ACM 48(3): pp. 49-53.
Dey, A. K. (2001). "Understanding and Using Context." Springer Personal and Ubiquitous Computing 5(1): pp. 4-7.
Dey, A. K. and Abowd, G. D. (1999). Towards a Better Understanding of Context and Context-Awareness. 1st Intl. Conference on Handheld and Ubiquitous Computing, Karlsruhe, Germany, Springer-Verlag, LNCS 1707.
Dong, S. H., Xiao, B., et al. (2000). "Multimodal user interface for internet." Jisuanji Xuebao/Chinese Journal of Computers 23(12): pp. 1270-1275.
Garlan, D. and Perry, D. (1994). Software architecture: practice, potential, and pitfalls, Sorrento, Italy, Publ by IEEE, Los Alamitos, CA, USA.
Garlan, D., Siewiorek, D., et al. (2002). "Project Aura: Towards Distraction-Free Pervasive Computing." IEEE Pervasive Computing, Special Issue on Integrated Pervasive Computing Environments 21(2): pp. 22 - 31.
Gwizdka, J. (2000). What's in the Context? Workshop on Context-Awareness, CHI 2000, Conference on Human factors in computing systems, The Hague, Netherlands.
Hina, M. D., Ramdane-Cherif, A., et al. (2005). A Ubiquitous Context-sensitive Multimodal Multimedia Computing System and Its Machine Learning-based Reconfiguration at the Architectural Level.
Hina, M. D., Tadj, C., et al. (2006). Machine Learning-Assisted Device Selection in a Context-Sensitive Ubiquitous Multimodal Multimedia Computing System. IEEE ISIE 2006, International Symposium on Industrial Electronics, Montreal, Quebec, Canada.
Kolodner, J. (1993). Case-Based Reasoning. San Mateo, CA, Morgan Kaufmann.
Lajmi, S., Ghedira, C., et al. (2007). "Une méthode d’apprentissage pour la composition de services web." L’Objet 8(2): pp. 1 - 4.
Mitchell, T. (1997). Machine Learning, McGraw-Hill.
Oviatt, S. (2002). Multimodal Interfaces: Handbook of Human-Computer Interaction. New Jersey, USA, Lawrence Erlbaum.
Poladian, V., Sousa, J. P., Garlan, D. and Shaw, M. (2004). Dynamic configuration of resource-aware services. ICSE 2004, 26th International Conference on Software Engineering.
Ringland, S. P. A. and Scahill, F. J. (2003). "Multimodality - The future of the wireless user interface." BT Technology Journal 21(3): pp. 181-191.
Tadj, C., Hina, M. D., et al. (2006). The LATIS Pervasive Patient Subsystem: Towards a Pervasive Healthcare System. ISCIT 2006, IEEE International Symposium on Communications and Information Technology, Bangkok, Thailand.
Varshney, U. (2003). "Pervasive Healthcare." Computer 36(12): pp. 138 - 140.
Weiser, M. (1991). "The computer for the twenty-first century." Scientific American 265(3): pp. 94 - 104.
Weiser, M. (1993). "Some computer science issues in ubiquitous computing." Communications of the ACM 36(7): pp. 74-84.
Weiser, M. and Brown, J. S. (1996). "Designing Calm Technology." Powergrid Journal 1(1): pp. 94 - 110.