Session Chair: Rachel McCrindle
J A Edmans, J Gladman, M Walker, A Sunderland, A Porter and D Stanton Fraser, University of Nottingham/University of Bath, UK
A virtual or mixed reality environment for neurological rehabilitation should simulate the rehabilitation of the task, not simply the task itself. This involves identifying the errors that neurologically damaged patients make during task performance and replicating the guidance given during skilled rehabilitation. The involvement of a skilled therapist in the design and development team is essential. Neurological rehabilitation is complex, and replicating it requires compromises between the desire to reproduce this complex process and the development time available. Virtual or mixed reality systems that can simulate the rehabilitation process are suitable for clinical effectiveness studies.
T Pridmore, D Hilton, J Green, R Eastgate and S Cobb, University of Nottingham, UK
Previous studies have examined the use of virtual environments (VEs) for stroke and similar rehabilitation. To be of real benefit it is essential that skills (re-)learned within a VE transfer to corresponding real-world situations. Many tasks have been developed in VEs, but few have shown effective transfer of training. We believe that, by softening the real/virtual divide, mixed reality technology has the potential to ease the transfer of rehabilitation activities into everyday life. We present two mixed reality systems, designed to support rehabilitation of activities of daily living and providing different mixtures of digital and physical information. Functional testing of these systems is described. System development and user evaluation continues, some of which is described in a sister paper (Edmans et al 2004) in this volume.
R Kizony, N Katz and P L Weiss, Hadassah-Hebrew University, Jerusalem/University of Haifa, Haifa, ISRAEL
The objective of this study was to provide experimental data to support a proposed model of VR-based intervention. More specifically, our goal was to examine the relationships between cognitive and motor ability and performance within virtual environments. Thirteen participants who had had a stroke took part in the study. They each experienced three virtual environments (Birds & Balls, Soccer and Snowboard) delivered by the GX video-capture system, and after each environment they completed a scenario-specific questionnaire and Borg's scale of perceived exertion. Their cognitive, motor and sensory abilities were measured as well. The participants' responses to the VR environments showed that they enjoyed the experience and felt high levels of presence. The results also revealed some moderate relationships between several cognitive abilities and VR performance. In contrast, motor abilities and VR performance were inversely correlated. In addition, there was a relationship between presence and performance within the Soccer environment. Although these results support some components of the proposed model, it appears that the dynamic nature of the virtual experiences would be better suited to comparison with different measures of motor ability than those used in the current study.
Y S Lam, S F Tam, D W K Man and P L Weiss, Hong Kong Polytechnic University, HONG KONG/University of Haifa, ISRAEL
The rationale, procedures and results of a pilot study using a VR system to train street survival skills in people with stroke are presented and discussed. The subsequent main study was refined on the basis of these outcomes to make it more feasible and potentially beneficial to patients.
R C V Loureiro, C F Collin and W S Harwin, University of Reading, UK
People who have been discharged from hospital following a stroke still have the potential to continue their recovery by doing therapy at home. Unfortunately it is difficult to exercise a stroke-affected arm correctly, and many people simply resort to using their good arm for most activities. This strategy makes many tasks difficult, and any task requiring two hands becomes nearly impossible. The use of haptic interface technologies will allow reach and grasp movements to be retrained by either assisting movement or directing movement towards a specified target. This paper demonstrates how initial work on machine-mediated therapies can be made available to a person recovering at home.
Session Chair: Hideyuki Sawada
A Lewis-Brooks, Aalborg University Esbjerg, DENMARK
The goal was to produce a unique, cost-effective and user-friendly computer-based telehealth product, conceptualised as an aid to home-based self-training through motivated creativity with the manipulation of multimedia. The system was designed for longevity and for modular integration into a future internet-based healthcare communication provision, and was intended as a supplementary tool for therapists. The initial target group was people with acquired brain injury. This paper details phase 1 of the product feasibility testing.
N Katz, H Ring, Y Naveh, R Kizony, U Feintuch and P L Weiss, Hadassah-Hebrew University/Tel Aviv University/ University of Haifa, ISRAEL
The goal of this study was to determine whether non-immersive interactive virtual environments are an effective medium for training individuals who suffer from Unilateral Spatial Neglect (USN) as a result of a right-hemisphere stroke. Participants were 19 patients with stroke in two groups: an experimental group who were given VR-based street-crossing training and a control group who were given computer-based visual scanning tasks, both for a total of twelve sessions over four weeks. The results achieved by the VR street-crossing intervention equalled those achieved by conventional visual scanning tasks, and for some measures the VR intervention even surpassed the scanning tasks in effectiveness. Despite several limitations of this study, the present results support further development of the program.
C Sik Lanyi, V Simon, L Simon and V Laky, University of Veszprem/Semmelweis Medical University, Budapest, HUNGARY
Virtual reality (VR) is nowadays a very popular subject. VR is an artificial world, a computer-mediated environment, in which the user tries to enter fully into the spirit of his or her role in an unreal world. Virtual Environment (VE) technology has undergone a transition in the past few years that has taken it out of the realm of expensive toy and into that of functional technology. During the past decade, the considerable potential of VEs for scientific study has been recognised in the field of mental healthcare. This paper presents the application of VR and the VR research at the University of Veszprem. The virtual worlds introduced below were developed for treating specific phobias (fear of travelling).
F D Rose, B M Brooks and A G Leadbetter, University of East London, ENGLAND
Assessing one's own driving ability is very subjective, and there are occasions when an objective off-road assessment would be very useful, and potentially life-saving. For example, after physical or mental trauma, or approaching old age, it would be very useful for people to perform their own off-road assessment to help them decide whether they should resume driving, or continue to drive. It is possible that people might be more likely to accept that it would be inadvisable for them to drive if they had themselves performed such an assessment. We are currently evaluating a virtual reality (VR) based driving assessment which runs on a PC and could be made easily accessible to people in these circumstances. The first stage was to assess the performance of drivers and non-drivers on the VR driving assessment, and to compare the results across the two groups of participants and with their performance on the Stroke Drivers Screening Assessment (SDSA). The VR driving assessment discriminated between drivers and non-drivers but the SDSA did not. In addition, two measures on the VR driving assessment correlated with drivers' scores on the SDSA.
N Shopland, J Lewis, D J Brown and K Dattani-Pitt, Nottingham Trent University/Learning Disability Services, London Borough of Sutton, Wallington, UK
This article describes the user centred design and development of a virtual environment (VE) to support the training of people with learning disabilities to travel independently. Three separate implementations were built on top of an initial design. Two of these environments implemented intelligent agents to scaffold learners using virtual environments; the third took stakeholder experiences to redesign the initial environment in an attempt to improve its utility.
Session Chair: William Harwin
Y Eriksson and D Gärdenfors, Göteborg University/ Royal Institute of Technology, Stockholm, SWEDEN
The Swedish Library of Talking Books and Braille (TPB) has published web-based computer games for children with different kinds of visual impairments. As the target groups have very different needs when it comes to the use of graphics and sound, TPB have developed two kinds of games. Image-based games aim to encourage children with partial sight to practise recognising visual objects, while sound-based games also intend to be accessible without relying on vision. Based on the results of two pilot studies, this paper discusses central design issues of the graphical and sound-based interfaces for this type of application.
D Rand, R Kizony and P L Weiss, University of Haifa/Beit-Rivka Geriatric Medical Center, Petach-Tikva, ISRAEL
The main objective of this paper was to investigate the potential of the Sony PlayStation II EyeToy (www.EyeToy.com) for use during the rehabilitation of elderly people with disabilities. This is a projected, video-capture system which was developed as a gaming environment for children. Compared to other virtual reality systems such as VividGroup's Gesture Xtreme (GX) VR (www.vividgroup.com), the EyeToy is sold commercially at a relatively low cost. This paper presents three pilot studies which were carried out to provide essential information about the EyeToy's potential for use in rehabilitation. The first study tested healthy, young adult participants (N=18) and compared their experiences using the EyeToy system to the GX system in terms of sense of presence, enjoyment, control, success and perceived exertion. The second study assessed the usability of the EyeToy with healthy elderly subjects (N=10), and the third study assessed its use with stroke patients (N=8). The implications of these three studies are discussed.
T Westin, Stockholm University, SWEDEN
Terraformers is the result of three years of practical research into developing a real-time 3D graphic game accessible to blind and low-vision gamers as well as fully sighted gamers. This presentation focuses on the sound interface and how it relates to the 3D graphic world, and also includes post-mortem survey results from gamers together with commentary on them.
A Lewis-Brooks, Aalborg University Esbjerg, DENMARK
Exclusion from the joy of experiencing music, especially in concert venues, applies particularly to those with an auditory impairment. There has been limited investigation into how to reduce this exclusion for this community when attending classical orchestral music concerts. By utilizing computer technology and human-machine interfaces (sensors and cameras) to stimulate complementary senses through interpretation, it is possible to reduce this exclusion. Case studies are presented in which visual and tactile interpretation of the music gives new meaning and understanding to such people.
Session Chair: Cecilia Sik Lányi
T Komura, A Nagano, H Leung and Y Shinagawa, City University of Hong Kong, HONG KONG/RIKEN, Saitama, JAPAN/University of Illinois at Urbana-Champaign, USA
In this study, we propose a new method to simulate the effect of fatigue and pathologies on human gait motion. The method is based on the Angular Momentum inducing inverted Pendulum Mode (AMPM), an enhanced version of the 3D linear inverted pendulum mode used in robotics to generate biped locomotion. By importing gait motion captured with a motion-capture device, the AMPM parameters that define the trajectory of the centre of mass and the angular momentum are calculated. By minimizing an objective function that takes into account the fatigue and disabilities of muscles, the original motion is converted to a new motion. Since our method describes the motion with a small number of parameters, the optimization process converges much more quickly than in previous methods.
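For readers unfamiliar with the underlying model, the 3D linear inverted pendulum mode referenced above keeps the centre of mass at a constant height z_c, so its horizontal dynamics reduce to x'' = (g/z_c)·x. The sketch below is illustrative only (the function names and parameters are ours, not the authors' AMPM formulation): it integrates one such trajectory numerically and provides the closed-form solution for comparison.

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def lipm_step(x0, v0, z_c, duration, dt=0.001):
    """Integrate the horizontal centre-of-mass motion of a 3D linear
    inverted pendulum, x'' = (g / z_c) * x, with the COM held at
    constant height z_c.  Returns a list of (x, v) samples."""
    omega2 = G / z_c
    x, v = x0, v0
    traj = [(x, v)]
    t = 0.0
    while t < duration:
        a = omega2 * x      # linearized pendulum acceleration
        x += v * dt
        v += a * dt
        t += dt
        traj.append((x, v))
    return traj

def lipm_analytic(x0, v0, z_c, t):
    """Closed-form solution: x(t) = x0*cosh(wt) + (v0/w)*sinh(wt)."""
    w = math.sqrt(G / z_c)
    return x0 * math.cosh(w * t) + (v0 / w) * math.sinh(w * t)
```

With a fine time step the numerical trajectory tracks the analytic solution closely, which is one reason such low-dimensional pendulum models make gait optimization tractable.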
O Palmon, R Oxman, M Shahar and P L Weiss, University of Haifa/Technion, ITT, ISRAEL
One of the major challenges facing the professionals involved in the home modification process is adapting environments so as to achieve an optimal fit between individuals and the settings in which they operate. The challenge originates primarily from a fundamental characteristic of design: one can see and test the final result of home modifications only after they have been completed. The goal of this study was to address this problem by developing and evaluating an interactive living environments model, HabiTest, to facilitate the planning, design and assessment of optimal home and work settings for people with physical disabilities. This paper describes the HabiTest tool, an interactive model implemented via an immersive virtual reality system which displays three-dimensional renderings of specific environments and responds to user-driven manipulations such as navigation within the environment and alteration of its design. Initial results of a usability evaluation of this interactive environment by users are described.
R Haverty, Microsoft Corporation, Washington, USA
Microsoft® Windows® User Interface (UI) Automation is the new accessibility framework for Microsoft Windows and is intended to address the needs of assistive technology products and automated testing frameworks by providing programmatic access to information about the user interface. UI Automation will be fully supported in the Windows platform on "Longhorn" and will be the means of enabling automated testing and accessibility for all new forms of Windows user interface, including existing legacy controls.
O Lahav and D Mioduser, Tel Aviv University, ISRAEL
Mental mapping of spaces, and of the possible paths for navigating these spaces, is essential for the development of efficient orientation and mobility skills. Most of the information required for this mental mapping is gathered through the visual channel. Blind people lack this crucial information and in consequence face great difficulties (a) in generating efficient mental maps of spaces, and therefore (b) in navigating efficiently within these spaces. The work reported in this paper follows the assumption that the supply of appropriate spatial information through compensatory sensorial channels, as an alternative to the (impaired) visual channel, may contribute to the mental mapping of spaces and consequently, to blind people's spatial performance. The main tool in the study was a virtual environment enabling blind people to learn about real life spaces, which they are required to navigate.
Session Chair: Tamar Weiss
U Feintuch, D Rand, R Kizony and P L Weiss, Hadassah-Hebrew University Medical Center/University of Haifa/Beit-Rivka Geriatric Center, Petach-Tikva, ISRAEL
Converging evidence demonstrates the important role played by haptic feedback in virtual reality-based rehabilitation. Unfortunately many of the available haptic systems for research and intervention are rather costly, rendering them inaccessible for use in the typical clinical facility. We present a versatile and easy-to-use software package, based on an off-the-shelf force feedback joystick. We propose that this tool may be used for a wide array of research and clinical applications. Two studies, involving different populations and different applications of the system, are presented in order to demonstrate its usability for haptic research. The first study investigates the role of haptic information in maze solving by intact individuals, while the second study tests the usability of haptic maps as a mobility aid for children who are blind.
A P Olsson, C R Carignan and J Tang, Royal Institute of Technology, Stockholm, SWEDEN/Georgetown University, Washington, DC, USA
The feasibility of performing remote assessment and therapy of patients over the internet using robotic devices is explored. Using a force feedback device, the therapist can assess the range of motion, flexibility, strength, and spasticity of the patient's arm grasping a similar robotic device at a remote location. In addition, cooperative rehabilitation strategies can be developed whereby both the patient and therapist cooperatively perform tasks in a virtual environment. To counter the destabilizing effects of time delay in the force feedback loop, a passive wave variable architecture is used to encode velocity and force information. The control scheme is validated experimentally over the internet using a pair of InMotion2 robots located 500 miles apart.
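The wave variable architecture mentioned above is a standard construction in delayed teleoperation: with wave impedance b, velocity v and force F are encoded as u = (b·v + F)/√(2b), and transmitting waves rather than raw velocity/force keeps the communication channel passive under constant delay. A minimal sketch of the transformation (our notation; the paper's controller details are not reproduced here):

```python
import math

def wave_encode(velocity, force, b):
    """Encode velocity and force into the forward wave variable:
    u = (b*v + F) / sqrt(2b), where b is the wave impedance."""
    return (b * velocity + force) / math.sqrt(2.0 * b)

def wave_decode(u, force, b):
    """Recover the velocity command at the receiving side from the
    transmitted wave u and the locally measured force:
    v = (sqrt(2b)*u - F) / b."""
    return (math.sqrt(2.0 * b) * u - force) / b
```

The encode/decode pair is exact in the absence of delay; the passivity benefit appears when u travels through a delayed channel, since the wave's energy content is bounded regardless of the delay.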
S A Wall and S Brewster, University of Glasgow, UK
Haptic force feedback devices can be used to allow visually impaired computer users to explore visualisations of numerical data using their sense of touch. However, exploration can often be time consuming and laborious due to the 'point interaction' nature of most force feedback devices, which constrains interaction to the tip of a probe used to explore the haptic virtual environment. When exploring large or complex visualisations, this can place considerable demands on short term memory usage. In this respect, a fundamental problem faced by blind users is that there is no way to mark points of interest or to harness external memory, in a similar way in which a sighted person may mark a graph or table at a point of interest, or leave a note in a margin. This paper describes the design, implementation and evaluation of external memory aids for exploring haptic graphs. The memory aids are 'beacons' which can be used to mark, and subsequently return to, a point of interest on the graph. Qualitative evaluation by visually impaired users showed that external memory aids are a potentially useful tool. The most commonly reported problem was that of using the keyboard to control placing of the beacons. Suggestions for subsequent re-design of the beacons in light of the participants' comments are considered.
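The beacon mechanism described above lends itself to a very small data structure: mark the probe's current coordinates, then recall them later so the device can guide the probe back. The sketch below is a hypothetical illustration of that bookkeeping only, not the authors' implementation, which drives a force feedback device rather than returning coordinates.

```python
class BeaconStore:
    """Minimal sketch of the 'beacon' external-memory aid: store
    marked probe positions on a haptic graph for later recall."""

    def __init__(self, max_beacons=3):
        self.max_beacons = max_beacons
        self.beacons = []

    def mark(self, position):
        """Store the current probe position; the oldest beacon is
        discarded once the limit is reached."""
        if len(self.beacons) >= self.max_beacons:
            self.beacons.pop(0)
        self.beacons.append(position)

    def recall(self, index):
        """Return a stored position for the device to home in on."""
        return self.beacons[index]
```

Capping the number of beacons mirrors the motivation in the abstract: the aid substitutes for limited short-term memory, so a handful of distinguishable marks is more usable than an unbounded list.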
A Caffrey and R J McCrindle, University of Reading, UK
This paper describes the creation of a multi-modal website that incorporates both haptics and speech recognition. The purpose of the work is to provide a new and improved method of internet navigation for visually impaired users. The rationale for implementing haptic devices and speech recognition software within websites is described, together with the benefits that can accrue from using them in combination. A test site has been developed which demonstrates, to visually impaired users, several different types of web application that could make use of these technologies. It has also been demonstrated that websites incorporating haptics and speech recognition can still adhere to standard accessibility guidelines such as Bobby. Several tests were devised and undertaken to gauge the effectiveness of the completed website. The data obtained have been analysed and provide strong evidence that haptics and speech recognition can improve internet navigation for visually impaired users.
Session Chair: Tomohiro Kuroda
S H Kurniawan, A Sporka, V Nemec and P Slavik, UMIST, Manchester, UK/Czech Technical University of Prague, CZECH REPUBLIC
The paper reports on the design and evaluation of a spatial audio system that models the acoustic response of a closed environment with varying sizes and textures. To test the fit of the algorithms used, the system was evaluated by nine blind computer users in a controlled experiment using seven distinct sounds in three environments. The statistical analysis reveals no significant difference in user perception of room sizes between sounds in real and simulated scenes. This system can contribute to the area of VR systems used for training blind people to navigate in real environments.
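A first-order way to see how room size and surface texture shape an acoustic response is Sabine's classic reverberation formula, RT60 = 0.161·V/A, where V is the room volume and A the total absorption (surface area times average absorption coefficient). The paper's actual modelling algorithm is not specified in the abstract, so the following is only a generic illustration of the size and texture cues such an experiment varies:

```python
def sabine_rt60(volume_m3, surface_m2, absorption_coeff):
    """Sabine's reverberation-time estimate: RT60 = 0.161 * V / A,
    with A = total surface area * average absorption coefficient.
    Larger rooms (bigger V) ring longer; 'softer' textures (higher
    absorption) shorten the reverberation."""
    A = surface_m2 * absorption_coeff
    return 0.161 * volume_m3 / A
```

Even this crude estimate reproduces the two perceptual trends a simulated room must convey: a hall reverberates noticeably longer than a small office, and carpeted surfaces damp the tail relative to bare walls.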
J H Sánchez and H E Flores, University of Chile, CHILE
Diverse studies using computer applications have been implemented to improve the learning of children with visual disabilities. A growing line of research uses audio-based interactive interfaces to enhance learning and cognition in these children. The development of short-term memory and mathematics learning through virtual environments has not been emphasized in these studies. This work presents the design, development, and usability of AudioMath, an interactive virtual environment based on audio to develop and exercise short-term memory and to assist the mathematics learning of children with visual disabilities. AudioMath was developed by and for blind children: they participated in the design and tested the software's usability during and after implementation. Our results give evidence that sound can be a powerful interface for developing and enhancing memory and mathematics learning in blind children.
A Lewis-Brooks and S Hasselblad, Aalborg University Esbjerg, DENMARK/Emaljskolan, Landskrona, SWEDEN
This contribution expounds on our prior research, in which interactive audiovisual content was shown to support Aesthetic Resonant Environments with (1) brain-damaged children, extended here with the addition of (2) people with learning disabilities, people with Parkinson's disease, and the aged. This paper appraises the experiments involved in preparing, developing and authenticating 'aesthetic resonance' within the Swedish partners' research (1 & 2). It reports on the inductive strategies leading to the development of the open architectural algorithms for motion detection, creative interaction and analysis, including the proactive libraries of interactive therapeutic exercise batteries based on multimedia manipulation in real time.
J H Sánchez, University of Chile, CHILE
Recent literature provides initial evidence that sound can be used for cognitive development purposes in blind children. In this paper we present the design, development, and usability testing of AudioBattleShip, a sound-based interactive environment for blind children. AudioBattleShip is an interactive version of the Battleship board game, providing different interfaces for both sighted and blind people. The interface is based on spatialized sound as a way of navigating and exploring the environment. The application was developed upon a framework that supports the development of distributed heterogeneous applications by synchronizing only some common objects, thus allowing the easy development of interactive applications with very different interfaces. AudioBattleShip was tested on cognitive tasks with blind children, giving evidence that it can help to develop and rehearse abstract memory through spatial reference, spatial abstraction through concrete representations, haptic perception through constructing mental images of the virtual space, and cognitive integration of both spatial and haptic references.
Session Chair: Craig Carignan
E A Keshner, R V Kenyon, Y Dhaher and J W Streepey, Rehabilitation Institute of Chicago/Northwestern University, Chicago/ University of Illinois at Chicago, USA
We have united an immersive virtual environment with support surface motion to record biomechanical and physiological responses to combined visual, vestibular, and proprioceptive inputs. We have examined age-related differences during peripheral visual field motion and with a focal image projected on to the moving virtual scene. Our data suggest that the postural response is modulated by all existing sensory signals in a non-additive fashion. An individual's perception of the sensory structure appears to be a significant component of the postural response in these protocols. We will discuss the implications of these results to clinical interventions for balance disorders.
J H Crosbie, S M McDonough, S Lennon, L Pokluda and M D J McNeill, University of Ulster, NORTHERN IRELAND
Virtual reality provides a three-dimensional computer representation of a real world or imaginary space through which a person can navigate and interact with objects to carry out specific tasks. One novel application of VR technology is in rehabilitation following stroke, particularly of the upper limb. Our research group has built a system for use in this field, which gives the user the ability to interact with objects by touching, grasping and moving their upper limb. A range of user perspectives has been tested with healthy individuals and with people following stroke.
M Maxhall, A Backman, K Holmlund, L Hedman, B Sondell and G Bucht, Umeå University, SWEDEN
The primary goal of this research was to study the possibility of a virtual environment (VE) influencing empathy in caregiving personnel. In the present explorative study, 9 subjects from Norrlands University Hospital (NUS) completed a test consisting of three everyday tasks: reading a newspaper, filling a glass of water and putting toothpaste on a toothbrush. The procedure was carried out twice, first from a non-stroke perspective and then from the perspective of a patient with stroke-related impairments. The VE looked like a normal apartment and could be experienced with or without various perceptual disorders caused by stroke. Data from interviews and observations were analysed using methods inspired by Grounded Theory. Results from observations and interviews indicate that, in spite of usability problems, the simulator was effective in influencing caregivers' empathy.
Session Chair: Jaime Sánchez
C Sik Lányi, E Bacsa, R Mátrai, Z Kosztyán and I Pataky, University of Veszprém/National Centre of Brain Vein Diseases OPNI, Budapest, HUNGARY
Aphasia is an impairment of language affecting the production or comprehension of speech and the ability to read or write. The most common cause is stroke: about 23-40% of stroke survivors acquire aphasia. The rehabilitation of aphasia is a medical, special treatment (speech therapy) that is the task of a psychologist, and it requires long and intensive therapy. More detailed information about the therapy can be found in Engl et al (1990) and Subosits (1986). In this paper we present our implementation of interactive multimedia educational software to develop readiness of speech in support of the therapy. The software was developed within the framework of youth scientific and MSc thesis works; the first program was developed in Flash, the second in Macromedia Director. The goal of our software is to teach the most important everyday words, and it will also be a useful device in the education of children with severe mental disabilities. The paper describes how the programs work and the results we have achieved so far.
H Sawada, N Takeuchi and A Hisada, Kagawa University, JAPAN
This paper presents a digital filtering algorithm which clarifies dysphonic speech while preserving the speaker's individuality. The study deals with the clarification of oesophageal speech and the speech of patients with cerebral palsy, and the filtering ability is being evaluated by listening experiments. Over 20,000 patients in Japan currently suffer from laryngeal cancer, and the only treatment for the terminal symptoms requires removal of the larynx, including the vocal cords. The authors are developing a clarification filtering algorithm for oesophageal speech; the initial software clarification algorithm and its effectiveness were reported previously. Several new clarification algorithms have been developed and implemented, and are being evaluated by questionnaire. The algorithms were also extended and applied to the clarification of speech by patients with cerebral palsy.
E Coyle, O Donnellan, E Jung, M Meinardi, D Campbell, C MacDonaill and P K Leung, Dublin Institute of Technology, IRELAND
A common suggested treatment for verbal apraxia is repetition and the use of slow speech. The required slow speech may be attained by time-scaling ordinary-speed speech. However, when used for this purpose, the quality of the expanded speech must be very high to be of pedagogical benefit. This paper describes a new method of time-scaling based on knowledge of speech characteristics, the relative durations of speech segments, and the variation of these durations with speaking rate. The new method achieves a high quality output, making it suitable for use as a computer-assisted speech therapy tool.
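Knowledge-based time-scaling of this kind distributes a global slow-down unevenly across segment classes, since in naturally slow speech vowels lengthen far more than stop consonants. The sketch below illustrates the planning step only; the segment classes and stretch factors are hypothetical, not the paper's measured values.

```python
# Hypothetical per-class stretch tendencies: vowels and pauses
# lengthen readily at slow speaking rates, stops barely change.
STRETCH = {"vowel": 1.8, "fricative": 1.4, "stop": 1.05, "pause": 2.0}

def timescale_plan(segments, target_factor):
    """Given (label, duration) segments, assign each a stretch factor
    proportional to its class tendency, rescaled so the total output
    duration equals target_factor * total input duration.
    Returns a list of (label, duration, factor) triples."""
    total_in = sum(d for _, d in segments)
    raw_out = sum(d * STRETCH[lbl] for lbl, d in segments)
    norm = target_factor * total_in / raw_out
    return [(lbl, d, STRETCH[lbl] * norm) for lbl, d in segments]
```

A uniform time-scaler would stretch every segment by the same factor; distributing the expansion this way keeps plosives crisp while the vowels carry most of the lengthening, which is closer to how speakers actually slow down.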
T Kuroda, Y Tabata, A Goto, H Ikuta and M Murakami, Kyoto University Hospital/Kyoto College of Medical Technology/AMITEQ Corp., Tokyo/Teiken Limited, Osaka, JAPAN
A data-glove that captures the full degrees of freedom of the human hand is a key device for handling sign language on information systems. This paper presents an innovative intelligent data-glove named StrinGlove. StrinGlove captures the full degrees of freedom of the human hand using 24 Inductcoders and 9 contact sensors, and encodes hand postures into posture codes on its own DSP. Additionally, the glove's simple structure reduces its cost. Several sign experts tried the prototype; the results show that it has a sufficient recognition rate as a sensor unit and is sufficiently comfortable to wear as a glove.
M Papadogiorgaki, N Grammalidis, N Sarris and M G Strintzis, Informatics and Telematics Institute, Thermi-Thessaloniki/Olympic Games Organizing Committee, Athens 2004, GREECE
This paper presents a novel approach for generating VRML animation sequences from Sign Language notation, based on MPEG-4 Face and Body Animation. Sign Language notation, in the well-known Sign Writing system, is provided as input and is initially converted to SWML (Sign Writing Markup Language), an XML-based format that has recently been developed for the storage, indexing and processing of Sign Writing notation. Each basic sign, namely sign box, is then converted to a sequence of Body Animation Parameters (BAPs) of the MPEG-4 standard, corresponding to the represented gesture. In addition, if a sign contains facial expressions, these are converted to a sequence of MPEG-4 Facial Animation Parameters (FAPs), while exact synchronization between facial and body movements is guaranteed. These sequences, which can also be coded and/or reproduced by MPEG-4 BAP and FAP players, are then used to animate H-anim compliant VRML avatars, reproducing the exact gestures represented in the sign language notation. Envisaged applications include interactive information systems for the persons with hearing disabilities (Web, E-mail, info-kiosks) and automatic translation of written texts to sign language (e.g. for TV newscasts).
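Conceptually, the conversion walks each sign box and emits a timed stream of animation-parameter keyframes that a BAP player can consume, with face and body tracks aligned frame by frame. The sketch below is a heavily simplified, hypothetical illustration: the symbol names and parameter targets are invented, whereas real MPEG-4 BAPs are indexed joint rotations covering the whole body.

```python
# Hypothetical mapping from SignWriting hand symbols to a few body
# animation parameter targets; a real converter covers the full
# MPEG-4 BAP set and interpolates smoothly between keyframes.
SYMBOL_TO_BAPS = {
    "hand_up":  {"r_shoulder_flexion": 900, "r_elbow_flexion": 300},
    "hand_out": {"r_shoulder_abduct": 600,  "r_elbow_flexion": 100},
    "fist":     {"r_wrist_flexion": 0,      "r_elbow_flexion": 450},
}

def sign_box_to_bap_frames(symbols, frames_per_symbol=10):
    """Expand a sign box (a sequence of symbols) into a timed list of
    keyframes, one dict of parameter targets per frame, so body and
    facial tracks can be synchronized on a common frame clock."""
    frames = []
    for sym in symbols:
        targets = SYMBOL_TO_BAPS[sym]
        frames.extend([dict(targets)] * frames_per_symbol)
    return frames
```

Emitting both tracks against one frame clock is what makes the facial/body synchronization mentioned in the abstract straightforward: the FAP stream can simply be indexed by the same frame counter.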
Session Chair: Tony Lewis-Brooks
N Anderton, P J Standen and K Avory, University of Nottingham, UK
Micro switches used to control sound and light displays increase the level of activity of people with profound intellectual disabilities and provide a means by which they can exert some control over their environments. This study set out to i) explore whether people with profound disabilities could learn to use a simple game controlled by a single micro switch and displayed on a normal computer monitor; and ii) document which activities on the part of a tutor best facilitated the performance of the learner. Four men and three women aged between 24 and 46 years with profound disabilities completed eight twice-weekly sessions in which they were given the opportunity to play a computer game operated by a large jelly bean switch. A tutor sat next to them throughout the sessions, and each session was recorded on videotape. Tapes were analysed for the help given by the tutor, use of the switch, and duration of attention. Although the game was too difficult for them, all participants increased the percentage of session time in which they looked at the monitor, and for all of them there were at least two sessions in which their switch pressing was a consequence of the tutor's activity.
R Bates and H O Istance, De Montfort University, Leicester, UK
An experiment is reported that extends earlier work on the enhancement of eye pointing in 2D environments, through the addition of a zoom facility, to its use in virtual 3D environments using a similar enhancement. A comparison between hand pointing and eye pointing without any enhancement shows a performance advantage for hand-based pointing. However, the addition of a 'fly' or 'zoom' enhancement increases both eye- and hand-based performance, and greatly reduces the difference between these devices. Initial attempts at 'intelligent' fly mechanisms and further enhancements are evaluated.
S J Battersby, D J Brown, P J Standen, N Anderton and M Harrison, Nottingham Trent University/University of Nottingham/The Portland Partnership, Nottingham, ENGLAND
The aim of this research is to design, develop, evaluate and manufacture an assistive/adaptive computer peripheral to facilitate interaction and navigation within Virtual Learning Environments and related learning content for people with physical and learning disabilities. The function of the device will be software-specific; however, the most common primary functions are those of selection, navigation and input.
M A Foyle and R J McCrindle, University of Reading, UK
The main method of interacting with computers and consumer electronics has changed very little in the past 20 years. This paper describes the development of an exciting and novel Human Computer Interface (HCI) that has been developed to allow people to interact with computers in a visual manner. The system uses a standard computer web camera to watch the user and respond to movements made by the user's hand. As a result, the user is able to operate the computer, play games or even move a pointer by waving their hand in front of the camera. Due to the visual tracking aspect of the system, it is potentially suitable for disabled people whose condition may restrict their ability to use a standard computer mouse. Trials of the system have produced encouraging results, showing the system to have great potential as an input medium. The paper also discusses a set of applications developed for use with the system, including a game, and the implications such a system may have if introduced into everyday life.
P J Standen, D J Brown, N Anderton and S Battersby, University of Nottingham/Nottingham Trent University, UK
Virtual environments have a role to play in facilitating the acquisition of living skills in people with intellectual disabilities, improving their cognitive skills and providing them with entertainment. However, the currently recommended devices for navigating and interacting with the environments are difficult to use. Using a methodology established in an earlier study, this study aims to systematically document the performance of users with the currently recommended devices in order to i) inform the design of a usable control device or devices and ii) act as a baseline against which new devices can be evaluated. Forty people with severe intellectual disabilities aged between 21 and 67 years used four environments, with an equal number of sessions devoted to each of the devices being evaluated. Results suggest that for navigation the joystick is better than the keyboard, but that for interaction the mouse is better than the fire button on the joystick. Preventing slippage of the joystick base would make its use much easier, and it is suggested that separate devices be retained for navigation and interaction.
Session Chair: David Brown
B Herbelin, P Benzaki, F Riquier, O Renault and D Thalmann, Swiss Federal Institute of Technology, Lausanne/Adult Psychiatry University Department, Prilly, SWITZERLAND
In the context of Cognitive and Behavioural Therapies, the use of immersion technologies to replace classical exposure could improve the therapeutic process. As it is necessary to validate the efficacy of such a technique, both therapists and VR specialists need tools to monitor the impact of Virtual Reality Exposure on the patients. According to previous observations and experiments, it appears that an automatic evaluation of the Arousal and Valence components of affective reactions can provide significant information. The present study investigates a possible method for computing Arousal and Valence from physiological measurements. Results show that the dimensional reduction is not statistically meaningful, but the correlations found encourage the investigation of this approach as a complement to cognitive and behavioural study of the patient.
L Nyberg, L Lundin-Olsson, B Sondell, A Backman, K Holmlund, S Eriksson, M Stenvall, E Rosendahl, M Maxhall and G Bucht, Umeå University, SWEDEN
Injuries related to falls are a major threat to older persons' health. A fall may result not only in an injury, but also in a decreased sense of autonomy in the person's daily life. In order to be able to prevent such falls there is a need to further understand the complex mechanisms involved in balance and walking. Here we present an immersive virtual reality system in which a person can move around while being subjected to various events which may influence balance and walking.
A Al-khalifah and D Roberts, University of Reading/University of Salford, Manchester, UK
Medical simulation, in particular that used for training and planning, has become an established application of Virtual Reality technology and an active area of simulation development in many academic and commercial institutions around the globe. A number of successful commercial medical simulators have been launched, while others remain hostage in research laboratories undergoing further development and improvement. This paper provides a dichotomy of modelling techniques in the context of deformation and cutting, giving examples of how these are applied in medical simulation, comparing their strengths and weaknesses, outlining limitations and pinpointing expectations for the future. We focus on mapping the aim of the simulator to the adoption of particular modelling approaches. A case study pays special attention to the simulation of human organs, where we uncover advances and limitations in the application of these modelling approaches.
A A Rizzo, L Pryor, R Matheis, M Schultheis, K Ghahremani and A Sey, University of Southern California/Kessler Medical Rehabilitation Research & Education Corp., West Orange, NJ, USA
Virtual Reality (VR) technology offers new options for neuropsychological assessment and cognitive rehabilitation. If empirical studies demonstrate effectiveness, virtual environments (VEs) could be of considerable benefit to persons with cognitive and functional impairments due to traumatic brain injury, neurological disorders, learning disabilities and other forms of Central Nervous System (CNS) dysfunction. Testing and training scenarios that would be difficult, if not impossible, to deliver using conventional neuropsychological methods are now being developed that take advantage of the assets available with VR technology. These assets include the precise presentation and control of dynamic multi-sensory 3D stimulus environments, as well as advanced methods for recording behavioural responses. When combining these assets within the context of functionally relevant, ecologically valid VEs, a fundamental advancement emerges in how human cognition and functional behaviour can be assessed and rehabilitated. This paper focuses on the results of two studies that investigated memory performance in two VEs having varying levels of functional realism. Within these VEs, memory tests were designed to assess performance in a manner similar to the challenges that people experience in everyday functional environments. One VE used a graphics based simulation of an office to test object memory in persons with TBI and healthy controls and found that many TBI subjects performed as well as the control group. The other study compared healthy young persons on their memory for a news story delivered across three different display formats, two of which used a 360-Degree Panoramic Video environment. The results of this 'in progress' study are discussed in the context of using highly realistic VEs for future functional memory assessment applications with persons having CNS dysfunction.
ICDVRAT Archive | Email | © 1996-2016 Copyright ICDVRAT |