Museums and the Web 2003 Papers
published: March 2004
Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 License

Focus your young visitors: Kids Innovation - fundamental changes in digital edutainment

Sebastian Sauer, ion2s buero fuer interaktion, and Stefan Göbel, ZGDV e.V. Digital Storytelling, Germany

Abstract

With regard to the acceptance of human-computer interfaces, immersion represents one of the most important methods for attracting young visitors to museum exhibitions. Exciting, diversely presented content as well as intuitive, natural and human-like interfaces are indispensable for engaging users with an interactive system that has both real and digital parts.

In order to overcome the obstacles to attracting the young, we have taken an interdisciplinary approach spanning the fields of multimedia, media design, TV/movie and literature, and especially myths, to develop narrative edutainment applications for kids. As in Hollywood productions, content and dramaturgic scripts are generated with authoring tools and delivered through multimodal interfaces enriched by natural conversation forms. These concepts are complemented by interaction design principles, methods and appliances such as Kids Innovation and Momuna.

This paper describes both existing technologies and new, innovative methods and concepts of interactive digital storytelling and user interaction design to establish immersive edutainment applications and appliances using multimodal user interfaces. These approaches, which merge into Dino-Hunter, an innovative game-oriented mobile edutainment appliance, are discussed within the environment of mobile application scenarios for museums and their young visitors.

Keywords: Kids Innovation, Momuna, Interactive Digital Storytelling, Dino-Hunter, Telebuddy, GEIST, Experience Appliances, Transforming User Interface, Multimodal Interfaces, Digital Information Booth, Edutainment Applications.

Introduction

The global aims and tasks of museums are not limited to the archival storage and conservation of cultural and scientific exhibits and artifacts. Rather, the transmission of knowledge, the imprint of cultural identity and the experience of art represent special issues for museums and the parties involved in the museum scenario. In order to be an active part of cultural life, it is necessary to compete with other current offerings and trend-setting developments in the information society, and to influence these developments. This paper describes practical examples, concepts and methods of innovative digital and interactive edutainment applications that enable especially young visitors to dive into a game-oriented experience and learning environment within museums. Examples include integrated multimedia systems of experience appliances consisting of avatar-based information kiosks, mixed reality environments, interactive toy characters and mobile information devices, all based on alternative trend-setting user interfaces providing multimodal and adventure-driven access to the contents of museums. The common scenario and newly introduced concept of Kids Innovation is modularly structured and addresses exactly the issues mentioned above. Altogether, Kids Innovation focuses on young people and provides mechanisms and methodologies for fundamental changes in digital edutainment applications. Thus, new ways of interaction and communication between kids and artifacts, between visitors and the museum, or between kids and visitors are introduced.

From the technical point of view, Kids Innovation is enhanced by different principles and mechanisms of interactive digital storytelling and user-centered interaction design methods. Interactive storytelling techniques enable museum institutions to provide exciting stories and interesting presentation modes "beyond the desktop", enabling (young) visitors to dive into a story and get detailed information about artifacts using game-based interfaces. These game-based interfaces are integrated within a mixed reality environment consisting of physical interfaces/props and devices such as sensors, video tracking, speech recognition or scanners, as well as virtual components such as a 3D environment with narrative characters talking to the visitors, and multimedia presentations. Consequently, interactive storytelling techniques improve the usability of digital edutainment applications and enable the creation of exciting, suspenseful and immersive interfaces.

From the commercial and marketing-oriented point of view, the use of experience appliances developed according to the principles of Digital Storytelling and Kids Innovation enables museum institutions to get valuable feedback concerning the impact and effect of their own exhibition concepts and the behavior of (young) visitors. Based on these findings and additional case studies, museum institutions can support schools, classes or any other kind of visitor group within the three major phases of visiting a museum: preparation, execution and post-processing.

State of the Art

User Centred Interaction Design

New high-end technology in the field of computers, multimedia systems and telecommunication arises every day and promises a better and easier way to manage human beings' lives. The advantage of such technologies is their nearly endless range of functionalities and possibilities. But the crux is that their complexity often stands in the way of intuitive access to their functions. Human beings have needs and want to solve problems without being stressed by having to understand extensive technologies. Letting the human being become the centre of consideration during the development process of technological innovations is the goal of User Centred Interaction Design. Scientists, product and communication designers, computer engineers, psychologists and anthropologists combine their competences to make things usable.

Until recently, most multimedia systems have worked exclusively with a combination of haptic hardware (solid interfaces) and traditional graphical user interfaces. Such user interaction components and principles are combined in standard desktop PCs, Personal Digital Assistants, Tablet PCs and thousands of software applications. Sometimes auditory components are integrated, but in a very reduced way. A combined solid/audio or audio-only interface is usually associated with devices developed for physically handicapped people (for example, the blind), or used in a special context where visual attention cannot be paid to the device; for example, while driving a car.

Nowadays users leave their office desktops and dive into mobile situations. That fact is reflected in new interactive and digital industry products for mobile usage, such as smartphones, Pocket PCs or CarPCs. This is an interesting and important point for the user interface developer. The mobile situation and the permanent changes of the user's surroundings cannot be defined as clearly as a more constant environment; for example, at the desktop in the office. Knowing how the five human senses work together, and how their reactions to different impressions influence the interaction between the user and a device, is more important than ever. This point is especially interesting within the museum context because of the combination of real artefacts in interaction with digital appliances.

User interface combinations and principles show that our senses can assimilate information in parallel. But every perception channel can support only one orientation of our attention. Jef Raskin [13] picked up on this problem and introduced the "locus of attention" in combination with the term "singularity" [10]. If the user is involved in one linear thinking process, can an unforeseen event deflect attention in a new direction? The answer is that no second locus of attention arises; instead, the old one breaks down and the new one takes priority.

This fact is very interesting for user interaction specialists. It especially influences the design process of human-device interfaces for mobile multimedia systems because of the constantly changing environment of the user in the mobile situation. The conclusion is that multimodal interfaces can provide different kinds of access to information and allow different communication modes, but one has to understand the temporary context of the user, his physical and emotional capabilities, the variable environment and the dynamic situation. This means that a static multimodal user interface which interacts with a user through constant modes ignores these changes and cannot be as effective as it could be. (For more on this, see "New methods and concepts" below.)

Digital Storytelling

From the content-related point of view, Digital Storytelling represents a new research discipline contributing to information technology by applying aspects of storytelling and entertainment to digital applications. New authoring methods for multimedia design and application development are currently being explored and implemented by an interdisciplinary team of communication designers and computer scientists. The goal of digital storytelling is to achieve new types of applications that are easily understood by everyone through telling a story [16]; additionally, new forms of multimodal interaction are integrated here. For integrated authoring, three aspects of interactive multimedia play a role in a scenario design description:

  • Graphic-interactive media: These are 2D or 3D graphical elements (including spatial sound sources) which can be placed, designed, and depicted in various displays, as well as manipulated by navigation and pointing techniques (an area already represented by Screen Designers and 3D Designers for virtual reality).
  • Continuous media: These include all kinds of time-dependent media, such as audio/video streams, animation, and speech. Design criteria are not questions of spatial layout, but of sequencing and timing as well as turn-taking in a conversation. They determine the dramaturgy of an application to a high degree.
  • Physical media & modalities: Selection and design of physical-spatial worlds and of adequate input and output devices, considering new requirements such as mobility and ubiquitous computing beyond pure screen design. Ergonomic and haptic qualities determine usability, acceptance, and aesthetics.

Designers must integrate, design, and evaluate all of these elements during the concept phase of a production. Ultimately, it is the end-user who will evaluate the overall impression of these integrated aspects and determine the application's usability and value.

To demonstrate the potential of this integrated design, ZGDV Darmstadt developed a kiosk system for SIGGRAPH 1999 that gives visitors the impression of stopping at a virtual trade show booth where they can get information by having a conversation with a human-like assistant in the form of an avatar [2]: a more or less "intelligent" being, ranging from a "simple digital assistant" to an autonomous virtual character with "real intelligence, knowledge and emotions" [14] [8]. This system includes natural interaction elements such as tracking visitors' poses and positions, and the positions of physical objects (e.g. real books), to establish a context for the conversation.

Figure 1: Digital Information Booth - Project "Info zum Anfassen" (tangible information).

Based on that scenario, further innovative ideas and concepts are indicated by the IZA ("Info zum Anfassen", English: tangible information) project [1], which aims to develop a mixed reality platform for a booth scenario in cooperation with the visual computing department. Visitors are recognized by sensors and greeted at the booth by avatars. Corresponding to the application scenario and conversation objectives, a talk is initiated. Here, instead of typical hypermedia, mouse- or menu-oriented interfaces, users can activate sensors, physically select flyers, or point at something (gesture recognition) in order to select presentation modes or control the workflow of the story. The story engine is generic in the sense of supporting different application scenarios (sales talk, infotainment, conversation, etc.), provides frameworks for various scenarios, and handles the device and media management. Grasbon and Braun describe a generic morphological approach to interactive storytelling in mixed reality environments [5], providing a non-linear interactive storytelling engine [3] based on a rule-based system implemented in JESS [4].
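
To make the idea of such a rule-based, non-linear story engine concrete, here is a minimal sketch in Python. It is not the JESS-based IZA engine; the scenes, precondition sets and selection rule are invented purely to illustrate how story state can gate which scenes may play next.

```python
# Minimal sketch of a rule-based, non-linear story engine in the spirit
# of the morphological approach described above. This is NOT the JESS
# implementation of the IZA project; scenes, preconditions and the
# selection rule are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Scene:
    name: str
    preconditions: set   # facts that must hold before the scene can play
    effects: set         # facts the scene adds to the story state

@dataclass
class StoryEngine:
    scenes: list
    state: set = field(default_factory=set)

    def playable(self):
        """Scenes whose preconditions hold and whose effects are still new."""
        return [s for s in self.scenes
                if s.preconditions <= self.state and not s.effects <= self.state]

    def step(self, choice=None):
        """Advance the story; user input may pick among the playable scenes."""
        options = self.playable()
        if not options:
            return None
        scene = choice if choice in options else options[0]
        self.state |= scene.effects
        return scene.name

engine = StoryEngine(scenes=[
    Scene("greeting",      set(),       {"greeted"}),
    Scene("product_pitch", {"greeted"}, {"pitched"}),
    Scene("handout_flyer", {"greeted"}, {"flyer_given"}),
    Scene("farewell",      {"pitched"}, {"done"}),
])

while (played := engine.step()) is not None:
    print("playing scene:", played)
```

In a booth installation, the user's physical actions (activating a sensor, picking up a flyer, a recognized gesture) would supply the `choice` argument, steering the story through different but always consistent paths.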

Figure 2 shows different multimodal interfaces integrated into the digital information booth of the IZA project. For example, microphones are used for speech recognition, a video camera for video recognition, and a business-card scanner to get in contact with the user and obtain his e-mail address in order to provide personalized information.

From the scientific point of view, there are several objects of research in Digital Storytelling which lead to new technology for multimedia applications. With new forms of scenario-based design, communication designers are to be included in an interdisciplinary design process for complex applications beyond screen design. Storytelling methods are to be used to create drafts, design artefacts, and prototypes in a pre-production phase. To include storytelling aspects such as dramaturgy, character, and emotions in an interface, new forms of Conversational Interfaces are proposed. Further, the vision of writing as programming leads to new high-level APIs that allow application programming in a storytelling way.

Figure 2: Digital Information Booth - multimodal interfaces

A current technological trend is the development of intelligent assistance systems. In contrast to a tool, which must support direct manipulation and should be handy, an assistant is expected to solve tasks autonomously, in parallel with other tools operated by the user. Proactivity is another criterion: in a museum, a trade show or other kiosk systems, a virtual being acting as a partner can facilitate orientation in artificial worlds by playing an active role in prompting user interaction. This interaction resembles interpersonal communication much more than interactions using traditional interactive tools. For this reason, anthropomorphous user-interface agents (avatars) within interactive (3D) worlds [11] are currently being developed. Avatars are able to communicate by using human-like interaction techniques, such as lip-synchronized speech, facial expressions, and gestures. A conversational user interface system should be proactive, narrative, and emotional, and should represent a certain role metaphor. According to the role, not only the avatar's behaviour but also its location and timing are important.
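
To make the proactivity criterion concrete, the following toy sketch in Python shows an avatar that initiates conversation when a visitor is sensed, rather than waiting for input. The event names and replies are invented; this is not the Improv system or the IZA engine.

```python
# Toy sketch of a proactive conversational agent: instead of waiting for
# commands (tool metaphor), the avatar reacts to sensed events and opens
# the conversation itself. Event names and replies are invented.

import random

GREETINGS = ["Hello! May I show you around?",
             "Welcome to the exhibition. Are you looking for anything particular?"]

class ProactiveAvatar:
    def __init__(self):
        self.in_conversation = False

    def on_event(self, event):
        if event == "visitor_detected" and not self.in_conversation:
            self.in_conversation = True          # proactive opening move
            return random.choice(GREETINGS)
        if event == "visitor_left":
            self.in_conversation = False
            return None
        if self.in_conversation and event.startswith("speech:"):
            topic = event[len("speech:"):]
            return f"You asked about '{topic}'. Let me explain..."
        return None

avatar = ProactiveAvatar()
print(avatar.on_event("visitor_detected"))   # the avatar speaks first
print(avatar.on_event("speech:triceratops"))
```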

Application Examples

Apart from digital information booths or information kiosks, there are other possible application areas using the same technology of user interaction design, interactive digital storytelling and multimodal user interfaces. For example, within the GEIST project a story engine based on the Propp model [12] is conceptualised as the control unit for dramaturgy [15][6], interactive workflow and further personalized and application-driven parameters in a mobile computer game. From the educational and learning point of view, the main goal of GEIST is to transmit historical information using mixed reality technology. Users/pupils walk around historical sites in Heidelberg and get information from virtual avatars presented on head-mounted displays.

In the context of EMBASSI and MAP, users are provided with animated avatars as assistants integrated into entertainment electronics, e.g. presented on a TV screen. Examples of this exist in both multimedia home-entertainment and business applications. In order to improve acceptance of the user interface and to increase immersion, these applications are enhanced by human-like conversation rules for the interaction with user interface agents.

Figure 3: TELEBUDDY - physical avatar visiting the EXPO exhibition 2000 in Hannover, Germany.

TELEBUDDY represents a physical avatar equipped with different sensors acting as sense organs (seeing, listening, speaking, showing), enabling users to contact the Telebuddy via the Internet and communicate through different channels for the various senses. For example, the Telebuddy can visit an exhibition, and different users (user groups) can look through the eyes of the Telebuddy or speak to other visitors of the exhibition using the Telebuddy speech interface.

With the art-E-fact project (art-E-fact: A Generic Platform for the Creation of Interactive Art Experience in Mixed Reality), an interdisciplinary team of computer scientists, media designers, artists, art historians, restorers and media publishers is reaching for a new stage of convergence of arts and technology. The project will create a platform to develop and explore new forms of creation, presentation and interaction with various kinds of artistic expression. Digital storytelling and mixed reality technologies enable new dimensions of artistic expression and will therefore build the foundation for new artistic applications.

Therefore, the main objective of the art-E-fact project is to develop a generic platform for interactive storytelling in Mixed Reality that allows artists to create artistic expressions in an original way, within a cultural context between the virtual ("new") and the physical ("traditional") reality. The generic platform for artistic creation in mixed reality is based on a kernel that combines a virtual reality system with a scenario manager for interactive conversations with partially autonomous virtual characters. Further, components for media management and abstract device management enable flexibility and allow authors to design multimodal interactions on various abstraction levels. The results are twofold:

  • Interactive storytelling dialogue structures are provided instead of "navigation" metaphors in hypertext structures.
  • The design of holistic spectator experiences is enabled by integrating design issues concerning content, story, characters, their modalities and the hardware used, instead of being constrained to mere screen design with a fixed interaction metaphor.

In summary, it is possible for artists to include anthropomorphic interactions such as speech, gestures, eye movement, and body poses in their designs of mixed realities, and to direct lifelike avatars in their acting.

Figure 4: art-E-fact - generic platform

Art-E-fact is a generic platform for the creation of interactive art experiences in mixed reality. The sketch shows an installation concept in which visitors have conversations with virtual philosophers on the screen (left), with various physical props as interaction options (right).

In the following sections of this paper, the authors describe new methods and concepts that adopt these basic technologies of interactive storytelling and user interaction design and transfer them into a mobile scenario for museums, as one representative of the wide range of edutainment applications. Here, game-oriented concepts are integrated to enhance the usability of the developed concepts and user acceptance, especially in the context of pupils and young visitors to museums. The difference between interactive storytelling and gaming [17] is that while both concepts try to attract attendees to walk through a story, within computer games users/players have no guarantee of solving the problem, which means it is possible to lose a game; within a story you are always successful. Another difference between storytelling and gaming is that digital storytelling always uses a dramatic concept. Braun and Schneider [1], Mateas and Stern [7] and Müller et al. [18] describe the usage of conversational user interfaces within these gaming vs. narrative storytelling environments. However, there is currently a trend to merge both disciplines and to use storytelling metaphors within narrative environments in the context of cyberspace for computer games [9].

New methods and concepts

The Transforming User Interface

The concept of the Transforming User Interface, developed in 2001 by ion2s, tries to solve the problems of the dynamic changes of the mobile situation, context and environment around the user and his own priorities and needs. A digital device, the Mobile Companion, which is used in four main areas (@home, @office, @car, stand-alone), interacts with the user across the visual, auditory and haptic modes through sounds, images, speech, gestures, etc. It transforms its user interface components when it is used in a new area or when the boundary conditions change. In the periphery, the Mobile Companion and its user interface components are prepared for their next tasks.

Figure 5: The 4 main areas (incl. periphery) for the Mobile Companion

When it is used in one special field, for example at the office, the interaction principle is optimized for work at a desktop: the graphical user interface has priority. When the Mobile Companion is integrated into the car, becoming a driving assistant system, the locus of the user's attention has to be on the driving process. It is then difficult to communicate with the Mobile Companion in the visual mode, so the graphical user interface must be transformed into an optimized impulse-based interface, and the auditory and gesture-based user interfaces become more important. For example, the result of a simple study shows that changes in the driver's stress influence the interface transformation. If stress is high, the auditory interfaces or gestures have priority, visual interaction is very reduced, and information density must be low. Higher content density and interaction via the graphical user interface are possible in a low-stress driving situation. Evaluation and studies of situations, environments, different user interface components and user contexts brought up the need for control equipment to govern the transformations.
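
As a minimal illustration of the kind of rule such control equipment could apply, the following Python sketch maps a driver-stress estimate to modality priorities and an allowed information density. The thresholds and weightings are invented for illustration and do not reproduce the ion2s study or implementation.

```python
# Minimal sketch of a stress-driven interface transformation rule.
# Thresholds, modality names and weightings are invented for
# illustration; they do not reproduce the ion2s study.

def transform_interface(stress):
    """Map a normalized driver-stress estimate (0.0 .. 1.0) to
    modality priorities and an allowed information density."""
    if stress > 0.7:                      # high stress: speech/gesture lead
        return {"visual": 0.1, "auditory": 0.6, "gesture": 0.3,
                "max_items_on_screen": 1}
    if stress > 0.3:                      # medium stress: mixed modes
        return {"visual": 0.4, "auditory": 0.4, "gesture": 0.2,
                "max_items_on_screen": 3}
    return {"visual": 0.7, "auditory": 0.2, "gesture": 0.1,
            "max_items_on_screen": 8}     # low stress: rich GUI allowed

print(transform_interface(0.9))   # -> auditory-dominated, minimal display
print(transform_interface(0.2))   # -> GUI-dominated, higher content density
```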

Figure 6: Sketches of example transformations for Smartphones and PDAs

These principles and the main concept of the Transforming User Interface can be transferred to every digital appliance that offers multimodal access and communication methods and is used in variable and changing environments such as exhibitions and museums. Current technology enhancements, for example tracking systems, offer the possibility of understanding the context and focusing on the human being. This point is very important for building up experience environments for edutainment appliances. Focusing on the user and getting him into conversation with applications, devices and other people enables user interface developers to generate innovative digital and interactive edutainment presentations, especially for kids.

Kids Innovation

Kids Innovation is a unit and a method of ion2s for developing user-centred interface concepts for the young target group. In cooperation with specialists in technical questions, storytelling, learning methods, education, pedagogy, etc., new concepts, applications and digital appliances are designed for kids in schools, in museums, in their private areas, and elsewhere. From the beginning, the young user is completely involved as a full and most important member of the interdisciplinary project team. During the project, the kids take part in workshops for activities and evaluations that deliver useful information for the concepts, about their "needs" and "wants", and about demographic and statistical facts of their user group. The transfer of know-how and knowledge lets kids bring up new ways of thinking, which is one synergetic effect. Kids Innovation projects help clients and partners to evaluate their "user kids" during the development process and during the continuous usage of the digital application.

Momuna

A special edutainment concept for integrating digital experience appliances in a museum was based on Kids Innovation. The Momuna concept (mobile museum navigator) targets museums or theme parks, their educational departments and enrichment school programs. It combines simple mobile devices (Momuna.companion/.pad), standard workstations (Momuna.desk), tracking and location technologies (Momuna.dataloc), databases and servers (Momuna.node), and possibilities to integrate any kind of digital presentation device and technology (Momuna.addons).
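
Read as a system composition, the Momuna modules could be outlined as follows. This is a hypothetical sketch in Python: the class names follow the module names above, but all attributes, methods and wiring are invented, since the paper does not specify the component interfaces.

```python
# Hypothetical outline of the Momuna module families named above.
# Class names follow the paper; attributes, methods and wiring are
# invented for illustration only.

from dataclasses import dataclass, field

@dataclass
class MomunaNode:                # central database/server
    registered: list = field(default_factory=list)

class MomunaDataloc:             # tracking and location service
    def locate(self, device_id):
        # Stub: would return the device's museum-floor coordinates.
        return (0.0, 0.0)

@dataclass
class MomunaCompanion:           # simple mobile device per visitor (group)
    device_id: str
    node: MomunaNode

    def check_in(self):
        self.node.registered.append(self.device_id)

node = MomunaNode()
MomunaCompanion("group-red-01", node).check_in()
print(node.registered)           # -> ['group-red-01']
print(MomunaDataloc().locate("group-red-01"))
```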

The Momuna system enables institutions to get valuable feedback about their own exhibition concepts, the behavior of (young) visitors, and how visitors deal with this new digital museum guide. The technologies and additional case studies help to support schools, classes or any other kind of visitor group within the three major phases of visiting a museum: preparation, execution and post-processing. This sounds very technical, but the concept focuses on human-computer interaction principles, user interface modes, and user groups and their needs.

Figure 7: Momuna scenario at the museum.

Figure 8: Momuna components.

A main digital device, the Momuna.companion, accompanies every visitor of the exhibition. Communication, cooperation, orientation, user tracking, and location-based services are provided by the technological environment installed in the museum. The intelligent use of these technologies in combination with the mobile devices, enhanced by the game-based and functional concept, expands the museum into an experience-based learning environment.

The museum, teachers and their pupils are the three main target groups within the Momuna concept. The kids are divided into three further subgroups: the individual, the buddies and the rest. During a story-based competition, based on educational aspects, the groups have to cooperate and communicate with each other via their Momuna.companions. They have to collect relevant information to solve their exercises. All relevant or special highlights can be stored for reinforcement. The teacher's Momuna.pad offers an overview of his groups, providing him with communication functionalities and other useful features to support his educational work.

Figure 9: IZA mobile - basis for Dino-Hunter using the Momuna concept.

Dino-Hunter

The basic methods of interactive digital storytelling, Kids Innovation and the different concepts of user interaction design are combined within the Momuna-based Dino-Hunter, which represents an exemplary demonstrator for an exciting edutainment application in the field of palaeontology exhibitions and museums. The demonstrator shows the possibility of "edutaining" pupils by providing a story-driven 3D puzzle. Experience appliances, complex mixed reality components and multimodal storytelling technologies enable young visitors to become "Dino-Hunters" during their museum visit.

Figure 10: Triceratops - typical representative of a fossil artifact within paleontological departments of museums.

Motivated by a basic story, young guests of the palaeontology exhibition enter an experience environment by playing a simulated 3D puzzle. The pupils look at artifacts, and identify and collect the virtual fragments of the puzzle. In the Dino-Hunter concept, the parts of the puzzle are bones, virtually marked directly on the dinosaur skeletons. Using their digital devices, supported by 3D tracking technologies, the pupils can use the device screen as a window to locate these hotspots. The Momuna.companion is used as an archaeological tool within the Dino-Hunter scenario.

In the first phase of Dino-Hunter, all young visitors (e.g. pupils of a school class) have a briefing with group selection and additional information about the simulated time and habitat. All pupils are specialists in finding and collecting bones and fossils. At the central place, the group reconstructs a skeleton. Here, the muscles and skin are reconstructed and a complete 3D model is generated (and optionally animated). Thus, the resulting model is visible from any point of view, using a PDA or any other mobile display device.

Figure 11: Dino-Hunter scenario - overview and components

The most important tools for implementing the Dino-Hunter scenario are simple personal digital assistants (PDAs). They are used to find, clean, transport and visualize fossil artifacts. The only additional hardware needed is a PC with powerful graphics hardware as the central server of the scenario (Momuna.node). The PC works as an information broker and the PDAs are its clients. All clients are continuously tracked (Momuna.dataloc) in order to locate each PDA (group of visitors) at any time. Additionally, the server tracks and records the viewpoints of each client, so that each PDA can later obtain its virtual camera for real-time visualization. To get a more photorealistic view, it is possible to use tracked tablet PCs.
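
The broker relationship described above could look roughly like the following Python sketch. The message format and class names are invented for illustration; the actual Momuna protocol is not specified in this paper.

```python
# Rough sketch of the information-broker pattern described above:
# a central server (Momuna.node) tracks client PDAs and records their
# viewpoints. The message format and classes are invented.

import time

class MomunaServer:
    def __init__(self):
        self.viewpoints = {}     # device_id -> list of (t, position, orientation)

    def report(self, device_id, position, orientation):
        """Called by a PDA client whenever its tracked pose changes."""
        self.viewpoints.setdefault(device_id, []).append(
            (time.time(), position, orientation))

    def last_pose(self, device_id):
        """Used to set up the client's virtual camera for visualization."""
        return self.viewpoints[device_id][-1]

server = MomunaServer()
server.report("pda-07", position=(12.4, 3.1, 1.5), orientation=(0.0, 90.0, 0.0))
t, pos, orient = server.last_pose("pda-07")
print(pos, orient)
```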

Figure 12: Dino-Hunter - use of augmented reality to show different real and artificial views on virtual triceratops

To identify the marked bones, the kids move the mobile Dino-Hunter device across the artifact. The system detects the real orientation of the device and tracks the viewpoint of the visitor. Thanks to the 3D tracking technology, the display window can exactly simulate the scene behind the digital companion. The realistically modelled environment of the museum and its exhibits is merged with virtual 3D elements. In the Dino-Hunter concept, the mixed reality components can be the marked bones the user has to find, or a number of hotspots offering additional information through an immersive media presentation. For example, real dinosaur skeletons can be seen "through" the Dino-Hunter's tool as a real dinosaur full of life. Another example is that you can add organs, muscles, or the skin of the dinosaur and see how it behaves in motion.
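
One building block of such an augmented view is deciding whether a marked hotspot falls inside the tracked device's field of view. The following Python sketch shows a deliberately simplified 2D version of this test; the parameters and geometry are invented and do not represent the actual Dino-Hunter tracking pipeline.

```python
# Simplified sketch of a hotspot-visibility test for the tracked device
# "window". Works in 2D (floor plan) with an invented field-of-view
# check; the real 3D tracking and rendering pipeline is not shown.

import math

def hotspot_visible(device_pos, device_heading_deg, hotspot_pos,
                    fov_deg=60.0, max_range=5.0):
    """True if the hotspot lies within the device's viewing cone."""
    dx = hotspot_pos[0] - device_pos[0]
    dy = hotspot_pos[1] - device_pos[1]
    if math.hypot(dx, dy) > max_range:
        return False
    angle_to_hotspot = math.degrees(math.atan2(dy, dx))
    delta = (angle_to_hotspot - device_heading_deg + 180) % 360 - 180
    return abs(delta) <= fov_deg / 2

# A marked bone 2 m ahead and slightly left of the visitor's device:
print(hotspot_visible((0, 0), 90.0, (-0.5, 2.0)))   # -> True
```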

Figure 13: Dino-Hunter - use of augmented reality to show different real and artificial views on virtual triceratops.

Many other scenarios are possible for Momuna and Dino-Hunter. The simplest scenario is called "Walk Around and Understand": each user has his own PDA or handheld, walks around in the museum, and his digital assistant enables him to see all artifacts in the way he wants: fossils with muscles and inner organs, or with coloured skin, feathered or even furred. In another scenario, scientists could compare different styles of reconstruction of extinct animals; Dino-Hunter can reconstruct the way a fossil would walk, run, fly or swim, so even scientists could benefit from using it. Furthermore, Dino-Hunter is easily transferable to other application scenarios such as mobile games for large booths at an exhibition, or any other edutainment scenario such as indoor and outdoor event tourism.

Summary

This paper presents different interdisciplinary concepts for edutainment applications combining computer science technologies, interactive digital storytelling techniques and Kids Innovation, enhanced by user interaction principles. Based on these concepts, Dino-Hunter is introduced as a Momuna-based and game-oriented mobile experience appliance scenario for museums. From the content-related point of view, interactive storytelling techniques are used to establish both an exciting story and a narrative environment/information appliance for kids. From the technical point of view, multimodal interfaces and mixed reality platforms are used to create natural, human-like and immersive user interfaces and to improve user-friendliness in general.

The concepts provided could easily be adapted to other application scenarios within the wide range of edutainment applications, e.g. e-commerce applications such as virtual or physical information kiosks, multimodal exhibition booths, cultural heritage, or collaborative learning environments.

Acknowledgements

The methods and concepts provided in this paper have been developed in collaboration between the department of Digital Storytelling at the Center for Computer Graphics (ZGDV) and the user interaction specialists of ion2s - buero fuer interaktion (English: office for interaction), both located in Darmstadt, Germany. Ion2s is responsible for Kids Innovation, Momuna and the Transforming User Interface. The department of Digital Storytelling is responsible for the technically oriented and storytelling aspects. Dino-Hunter is introduced as a new concept based on intensive collaboration between the two institutions.

Related Links

ZGDV e.V. Digital Storytelling - http://www.zgdv.de/distel

ion2s - buero fuer interaktion - http://www.ion2s.com

Kids Innovation - http://www.kidsinnovation.com

Momuna - http://Momuna.ion2s.com

GEIST project, see http://www.tourgeist.de

EMBASSI and MAP projects, see http://www.embassi.de and http://www.map21.de

TELEBUDDY project, see http://www.telebuddy.de

art-E-fact project, see http://www.art-e-fact.org

References

[1] Braun, N., O. Schneider. Conversation modeling as an abstract user interface component. In: Proceedings of the GI Workshop "Synergien zwischen Virtueller Realität und Computerspielen: Anforderungen, Design, Technologien", Vienna, September 2001.

[2] Cassell, J., S. Prevost, J. Sullivan, E. Churchill. Embodied Conversational Agents. Cambridge, MA: MIT Press, 2000.

[3] Crawford, C. Assumptions underlying the Erasmatron interactive storytelling engine. In: Mateas, M., P. Sengers (eds.), Proceedings of the AAAI Fall Symposium: Narrative Intelligence, Technical Report FS-99-01. Menlo Park, CA: AAAI Press, 1999, p. 112-114.

[4] Friedman-Hill, E.J. Jess, the Java Expert System Shell. SAND98-8206 (revised). Livermore, CA: Sandia National Laboratories, 2001.

[5] Grasbon, D., N. Braun. A morphological approach to interactive storytelling. In: Fleischmann, M., W. Strauss (eds.), Proceedings: cast01 // living in mixed realities. Special issue of netzspannung.org/journal, the magazine for media production and inter-media research, 2001.

[6] Laurel, B. Computers as Theatre. Reading, MA: Addison-Wesley, 1993.

[7] Mateas, M., A. Stern. Towards integrating plot and character for interactive drama. In: Dautenhahn, K. (ed.), Proceedings of the AAAI Fall Symposium: Socially Intelligent Agents: The Human in the Loop, Technical Report FS-00-04. Menlo Park, CA: AAAI Press, 2000, p. 113-118.

[8] Mateas, M., P. Sengers. Narrative intelligence. In: Mateas, M., P. Sengers (eds.), Proceedings of the AAAI Fall Symposium: Narrative Intelligence, Technical Report FS-99-01. Menlo Park, CA: AAAI Press, 1999, p. 1-10.

[9] Murray, J. Hamlet on the Holodeck: The Future of Narrative in Cyberspace. Cambridge, MA: MIT Press, 1998.

[10] Penrose, R. The Emperor's New Mind. Oxford: Oxford University Press, 1989, p. 398.

[11] Perlin, K., A. Goldberg. Improv: a system for scripting interactive actors in virtual worlds. Computer Graphics 1996; 24(3): 205-216.

[12] Propp, V. Morphology of the Folktale. International Journal of American Linguistics, Part III, 1958; 24(4): 1-106.

[13] Raskin, J. The Humane Interface: New Directions for Designing Interactive Systems. Reading, MA: Addison-Wesley/ACM Press, 2000, p. 24ff.

[14] Sengers, P. Narrative intelligence. In: Dautenhahn, K. (ed.), Human Cognition and Social Agent Technology, Advances in Consciousness Series. Philadelphia, PA: John Benjamins Publishing Company, 2000.

[15] Szilas, N. Interactive drama on computer: beyond linear narrative. In: Mateas, M., P. Sengers (eds.), Proceedings of the AAAI Fall Symposium: Narrative Intelligence, Technical Report FS-99-01. Menlo Park, CA: AAAI Press, 1999, p. 150-156.

[16] Spierling, U., D. Grasbon, N. Braun, I. Iurgel. Setting the scene: playing digital director in interactive storytelling and creation. Computers & Graphics 26(1), 2002, p. 31-44.

[17] Woodcock, S. Game AI: the state of the industry. Game Developer Magazine, August 2001.

[18] Müller, W., U. Spierling, M. Alexa, I. Iurgel. Design issues for conversational user interfaces: animating and controlling 3D faces. In: Proceedings of Avatars 2000, Lausanne, 2000.