In its first year, GUIDE has mainly concentrated on gathering and analysing requirements from the project's various stakeholders. In addition, specifications of the GUIDE framework as well as of user interface technologies and applications have been developed.
On the user requirements side, GUIDE has performed three phases of user trials, following several approaches. Initially, users were asked to answer a questionnaire that collected information about their experience with technology and to make a self-assessment of their impairments. In focus-group sessions, further, more detailed technical questions were answered, supported by UI mock-ups (e.g. PPT slides showing screenshots) and dedicated video scenes showing multi-modal interaction with a mock-up GUIDE system. These scenes involved elderly actors and UI mock-ups of all GUIDE applications. Furthermore, users performed real interactive tests (see picture) with a fully integrated user test application that covers most of the UI technologies considered for GUIDE. These interactive tests collected preference feedback as well as raw data as a basis for user modelling. Major outcomes of these tests were preliminary guidelines for prototype improvement and application design, initial user modelling data, user requirements, and users' modality preferences.
For the industrial (developer) requirements, GUIDE performed a public web-based survey addressing aspects of accessibility and the corresponding industrial requirements. The survey collected data about current practice and about features desired for run-time adaptation and design-time simulation. It was announced in several networks and mailing lists (e.g. NEM, EDeAN) and achieved a good response from all stakeholder groups considered by the project. In addition, GUIDE conducted two dedicated developer focus-group sessions in Rennes (see picture right), involving STB developers from Technicolor, ORANGE, SII and SmarDTV. Furthermore, GUIDE was represented at several workshops and events in this period and was able to collect and analyse requirements from a wide audience. Moreover, a market study was undertaken to identify potential market gaps and to ensure that GUIDE outcomes are relevant for industry and end customers.
The GUIDE partners have finalised the specification of the GUIDE Framework and Tools. This specification includes approaches and schemes for performing multi-modal adaptation for web applications. The GUIDE Framework is a software framework that can be installed on STBs and connected TVs. It integrates with web application environments (web browsers) and various kinds of UI technologies. The Framework builds on UI component integration technology from the PERSONA framework, relying on communication buses, and it hosts the GUIDE core components: Input Adaptation, Multi-Modal Fusion & Fission, Dialog Manager, and the Application, User and Context Models. The GUIDE toolbox consists of the Simulator (see section below) and a tool that automatically extracts an application model from GUIDE-enabled applications.
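To give a feel for the bus-based component integration described above, the following minimal publish/subscribe sketch shows how core components could register for events on a shared communication bus. The bus API, topic names and message payloads here are invented for illustration; they do not reflect the actual PERSONA or GUIDE interfaces.

```python
from collections import defaultdict

class EventBus:
    """Minimal publish/subscribe bus (illustrative sketch only,
    not the actual PERSONA/GUIDE bus implementation)."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        # A core component registers a callback for a topic.
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Deliver the message to every component subscribed to the topic.
        for handler in self._subscribers[topic]:
            handler(message)

# Hypothetical core components subscribing to raw input events.
bus = EventBus()
log = []
bus.subscribe("input.raw", lambda msg: log.append(("fusion", msg)))
bus.subscribe("input.raw", lambda msg: log.append(("dialog-manager", msg)))

# A hypothetical input device publishes a recognised utterance; both
# subscribed components receive it without knowing about each other.
bus.publish("input.raw", {"modality": "speech", "utterance": "volume up"})
```

The design point of such a bus is loose coupling: new UI technologies can be integrated by publishing to agreed topics, without modifying the components that consume the events.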
GUIDE partners developed a first version of the GUIDE user model. The user model is the basis for user simulation at design time and UI adaptation at run time. The model represents knowledge about the user's impairments, their cognitive, perceptual and motor capabilities, and their individual preferences. Basic data for this model has been collected in the user trials, and initial user groupings could already be extracted through cluster analysis. The GUIDE consortium is cooperating closely with other projects in the VUMS cluster (Virtual User Modelling & Simulation).
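As a minimal sketch of the kind of cluster analysis mentioned above, the snippet below groups hypothetical trial measurements (reaction time in seconds, pointing error in pixels) into two capability clusters with a tiny k-means. The data, the choice of features and the number of clusters are all illustrative assumptions, not GUIDE results.

```python
# Hypothetical per-participant measurements: (reaction time s, pointing error px).
samples = [(0.4, 10), (0.5, 12), (1.2, 40), (1.3, 45), (0.45, 11), (1.25, 42)]

def kmeans(points, centroids, iterations=10):
    """Very small k-means: assign each point to its nearest centroid,
    then recompute centroids, for a fixed number of iterations."""
    for _ in range(iterations):
        groups = [[] for _ in centroids]
        for p in points:
            distances = [sum((a - b) ** 2 for a, b in zip(p, c))
                         for c in centroids]
            groups[distances.index(min(distances))].append(p)
        centroids = [
            tuple(sum(axis) / len(g) for axis in zip(*g)) if g else c
            for g, c in zip(groups, centroids)
        ]
    return centroids, groups

centroids, groups = kmeans(samples, [(0.4, 10), (1.2, 40)])
# groups[0]: faster/more precise users; groups[1]: slower/less precise users.
```

In a real pipeline the resulting cluster centroids could seed default user profiles, which the run-time adaptation then refines per individual.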
The consortium has furthermore developed several mixed-fidelity application prototypes for the user trials that took place in GUIDE. GUIDE reference applications were, for example, represented in paper-based and video-based prototypes. These screenshot designs and videos were shown in the focus groups with elderly test users, and they depicted animated user interfaces and actors demonstrating multi-modal interaction. Besides the video- and paper-based prototypes, a new application was identified and developed as a fully functional software prototype. It integrates prototypes of the user interface technologies in the consortium (gesture recognition, speech recognition, remote control, avatars, etc.) and performs tests with users in order to measure their capabilities when they use the system for the first time (see picture right). This "User Initialisation Application" will become an integral part of the GUIDE framework and will be usable by all GUIDE-enabled applications.
Initial versions of many of the actual UI components intended for GUIDE could already be integrated into this user initialisation application. This made it possible to obtain initial results on users interacting in different input and output modalities through real user interface technologies: video-based gesture recognition, multi-touch interfaces, a gyroscopic remote control and a standard remote control for input; video rendering, speech synthesis and avatar rendering for output. Other user interface modalities were simulated during the user tests.
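The steps above suggest a pipeline in which the initialisation application turns raw measurements into user-model entries. The sketch below shows one plausible shape for that mapping; the thresholds, field names and categories are invented for illustration and are not GUIDE's actual parameters.

```python
def derive_profile(reaction_time_s, min_readable_font_pt, speech_recognised):
    """Map hypothetical initialisation-test measurements to a small
    user-model fragment. All thresholds below are illustrative guesses,
    not the values used in GUIDE."""
    profile = {}
    # Slow responses to on-screen prompts -> flag reduced motor capability.
    profile["motor"] = "reduced" if reaction_time_s > 1.0 else "typical"
    # Smallest font size the user could read during the vision test.
    profile["vision"] = "low" if min_readable_font_pt > 24 else "typical"
    # Prefer speech input only if the recogniser understood the user.
    profile["preferred_input"] = "speech" if speech_recognised else "remote"
    return profile

profile = derive_profile(1.4, 30, True)
# -> {'motor': 'reduced', 'vision': 'low', 'preferred_input': 'speech'}
```

A fragment like this is what GUIDE-enabled applications could then query to decide, for instance, whether to enlarge buttons or fall back to speech output.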
As the GUIDE project targets Web & TV applications and related platforms, the consortium has already started development on the set-top box platform of partner Technicolor. A first version of a video-based anthropomorphic avatar on the STB has been realised (see picture left), which makes use of existing decoder resources and provides a very efficient way of high-fidelity rendering on low-power CE devices. Further, the consortium has started to migrate the base of the GUIDE framework to the STB platform. This base framework relies on a bus architecture and can easily integrate various user interface technologies.
Finally, a first prototype of the GUIDE simulation tool has been developed. It takes user interface designs as input and allows developers to evaluate them with respect to various vision and motor impairments. This means that developers can perceive the user interface as a user with such vision impairments would, and can assess how an impaired person can interact with the user interface layout (see picture right). The simulation is based on a virtual user, which exploits the GUIDE user model.
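One concrete check a design-time simulator of this kind might perform is a contrast assessment of the UI design. The sketch below computes the standard WCAG relative luminance and contrast ratio for a foreground/background colour pair; whether GUIDE's simulator uses this particular metric is an assumption on our part.

```python
def relative_luminance(rgb):
    """WCAG relative luminance of an sRGB colour given as 0-255 ints."""
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between two colours (1.0 to 21.0)."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white background has the maximum possible ratio.
ratio = contrast_ratio((0, 0, 0), (255, 255, 255))  # -> 21.0
```

A simulator could flag any text/background pair in the design that falls below a threshold tied to the virtual user's vision profile, letting the developer fix the layout before user testing.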