
The Effect on Lower Spine Muscle Activation of Walking on a Narrow Beam in Virtual Reality

Angus Antley and Mel Slater

Abstract—To what extent do people behave in immersive virtual environments as they would in similar situations in a physical environment? There are many ways to address this question, ranging from questionnaires and behavioral studies to physiological measures. Here, we compare the onsets of muscle activity using surface electromyography (EMG) while participants were walking under three different conditions: on a normal floor surface, on a narrow ribbon along the floor, and on a narrow platform raised off the floor. The same situation was rendered in an immersive virtual environment (IVE) Cave-like system, and 12 participants did the three types of walking in a counterbalanced within-groups design. The mean number of EMG activity onsets per unit time followed the same pattern in the virtual environment as in the physical environment—significantly higher for walking on the platform compared to walking on the floor. Even though participants knew that they were in fact really walking at floor level in the virtual environment condition, the visual illusion of walking on a raised platform was sufficient to influence their behavior in a measurable way. This opens the door for this technique to be used in gait- and posture-related scenarios, including rehabilitation.

Index Terms—Information technology and systems, multimedia information systems, artificial, augmented, and virtual realities, evaluation/methodology.

1 Introduction

We describe an experiment where participants in an immersive virtual environment (IVE) walked on a virtual narrow raised platform that we call a beam.

When they walked on the virtual beam they exhibited muscle activity that was significantly greater than when they walked on the same narrow path (virtually) located at ground level. Of course, all of their walking in the virtual environment really occurred at ground level; only the visual illusion of being above ground, realized through the virtual reality displays, was responsible for the changes in muscle activity.

An IVE can be characterized by the extent to which participants are able to perceive via normal sensorimotor contingencies [1]—that is, use their body in order to perceive using rules similar to those in physical reality. Normally in physical reality, we perceive visually through knowing how to change our gaze direction, for example, by turning our body, head, and eyes to enable sight of that which is currently 180 degrees behind us, or moving our head so as to see beyond an object that currently obscures a point of interest, or moving our head closer to something in order to be able to see or hear it more clearly. No virtual reality system in existence today can enable the full range of sensorimotor contingencies (SCs) that are possible in physical reality—for example, constraints on display resolution make close viewing of an object impossible, and the lack of generalized haptics makes the vast majority of haptic perceptual actions impossible in IVEs. Nevertheless, there are systems that approximate this to varying extents—a typical head-mounted display (HMD) with head-tracking allows visual SCs in any direction, but the field of view is often highly constrained and the resolution is orders of magnitude less than natural vision. A multiwall stereo projection system with tracking, such as a Cave [2], [3], also supports an approximation to at least visual SCs, but again, a Cave is highly constrained, though usually with higher resolution than an HMD.

• A. Antley is with the Department of Computer Science, Malet Place Engineering Building, University College London, Gower Street, London WC1E 6BT, UK. E-mail: a.antley@cs.ucl.ac.uk.

• M. Slater is with the Institució Catalana de Recerca i Estudis Avançats (ICREA), Departament de Personalitat, Avaluació i Tractaments Psicològics, Facultat de Psicologia, Universitat de Barcelona, Campus de Mundet—Edifici Teatre, Passeig de la Vall d'Hebron 171, 08035 Barcelona, Spain, and also University College London. E-mail: melslater@ub.edu.

Manuscript received 5 Apr. 2009; revised 3 July 2009; accepted 16 July 2009; published online 28 Jan. 2010.

Recommended for acceptance by B. Watson.

For information on obtaining reprints of this article, please send e-mail to: tvcg@computer.org, and reference IEEECS Log Number TVCG-2009-04-0073.

Digital Object Identifier no. 10.1109/TVCG.2010.26.

Participants in an immersive virtual environment will typically experience the illusion of being in the place depicted by the displays. We refer to this as Place Illusion (PI) [4]. This has usually been referred to in the literature as "presence" [5], [6], [7], [8], [9], [10], derived from the term "telepresence" [11], the feeling that people operating a teleoperator system might have of being at the remote site of the robot. The term "presence," however, has come to be overloaded with many different possible meanings that go beyond the original concept of the strong illusion of being in a place that is associated with perceiving a remote or virtual space through natural SCs. Therefore we reserve the term PI to refer specifically to the strong illusion of being in the virtual place.

A major hypothesis underlying much of our research in this field over several years is that when PI occurs, and when situations and events depicted in the virtual environment belong to a possible and plausible world (not necessarily a physically realizable world), participants will tend to respond to their virtual experience as if it were real. This response-as-if-real (RAIR) occurs because not only is the situation being depicted a possible one, but the participant is personally "there" as part of it. At some level the brain does not distinguish between reality and virtual reality, and produces automatic subjective, behavioral, and physiological responses that correspond to the hypothesis: "this is really happening, and it is happening in my vicinity." Of course, participants know for sure that what is happening is not real, and that they are not in the virtual place, but we are referring here to those immediate and automatic responses that occur before conscious reflection, and which conscious reflection does not impede.

There is substantial evidence that RAIR occurs in a number of specific domains. For example, it has been shown that higher anxiety is reported when people speak to a group of negatively behaving virtual characters compared to a neutral or positively behaving audience [12]. A visual-cliff type of environment has also been used, in which participants in an IVE depicting a precipice showed significantly increased heart rate [13], [14], [15]. Skin conductance, heart rate, and heart rate variability have all been shown to respond significantly in the context of general social situations [16], and in social situations that are highly stressful [17]. It has also been found that there are similarities in behavior when participants play handball in virtual reality compared to physical reality [18]. In some of these examples anxiety, as measured through physiological responses, is one sign that people are responding to the virtual events as if they were really happening. Eye scanpaths have also been demonstrated to change appropriately within an IVE experience [19].

1.1 Hypothesis

Our experiment was designed to test the hypothesis that the increase in muscle activation in response to walking on a reduced area of support above ground level, when compared to walking at ground level, can be induced within an IVE. Moreover, a comparison is made with the same setup in physical reality. In a real room, experimental participants exhibited more frequent muscle activations of the extensor muscles of the lower spine when walking on a narrow, raised platform than they did walking on the floor. These activations prevented their center of gravity from falling outside the narrowed base of support afforded by the platform. Our hypothesis was that when participants perceived the raised platform while walking in an IVE modeled on the physical room and platform, they would exhibit a similar proportional increase in these muscle activations compared to walking on the virtual floor.

Fig. 1. The placement position of the EMG sensors on either side of the lower spine.

In previous research the focus of attention has been on subjective responses as elicited by questionnaires, and on autonomic nervous system physiological responses such as heart rate. Here, we show that there are measurable muscular changes in the act of walking within an IVE as a function of the type of area of support that is depicted. Moreover, the direction of the changes in muscle activity is the same as in a similar physical reality, even if the absolute levels are not the same. In Section 2, we describe the materials and methods of the experiment, including the design, scenario, equipment, and method of analysis. In Section 3, we give the results, followed by discussion and conclusions in Sections 4 and 5. The experiments described were approved by the UCL Ethics Committee.

2 Materials and Methods

2.1 Materials

The experiment was conducted in two locations. The first was a Trimension ReaCTor, which is a Cave-like, projection-based system. The second location was a nearby room in our laboratory. The Trimension ReaCTor system has three 3 m × 2.8 m back-projected screens: front, left, and right, and a 3 m × 3 m front-projection surface on the floor. The system is controlled by a Windows-based PC cluster. The computers in the cluster contained Intel Pentium 3.2 GHz processors with 1 gigabyte of RAM and Nvidia Quadro FX 5600 graphics cards. The participants were fitted with shutter glasses that were synchronized with the projectors, delivering active stereo at 45 Hz to each eye. Attached to the top of the glasses was an InterSense IS-900 tracking device to track the head of the participant. Also, each participant was fitted with a Mind Media Nexus-4 wireless physiological device that recorded two channels of surface electromyogram (EMG) at 1,024 Hz. For each channel, two electrodes were attached to the left and right of the lumbar spine, parallel to the L4 and L5 vertebrae, as shown in Fig. 1.

2.2 EMG Data Processing

The dependent variable for this experiment was the number of muscle activations, or onsets, per unit time extracted from the surface EMG of each participant in each of the six conditions.

Fig. 2. The rectified, filtered signal (time axis in samples; 1 sample = 1000/1024 ms) showing the detection of two onsets; the first peak crosses the threshold for less than 25 ms and is, therefore, not counted as an onset.

Other things being equal, we would expect a rise in the number of extracted onsets in the beam conditions compared to the floor conditions.

Briefly, an onset occurs when the filtered and rectified EMG signal stays above a threshold value for more than 25 ms. This threshold value is computed as three times the standard deviation of a baseline signal. In other words an onset is a period of atypically high activity. This method is based on that described in [20].

In our experiment, EMG was recorded from the erector spinae muscles of the lumbar spine using the Nexus-4, which samples at 1,024 Hz and has a signal-to-noise ratio of 53.5 dB for a reference signal of 1 mV at 32 Hz. The differential sensors have a Common Mode Rejection Ratio (CMRR) of 100 dB. We now describe in more detail how the onsets were computed.

The Nexus-4 device has antialias hardware filters that remove all frequencies above 450 Hz. De Luca [21] suggests that the full bandwidth of the surface EMG signal lies between 20 Hz and 500 Hz. Therefore, we used a third-order Butterworth filter to high-pass filter the raw EMG data above 20 Hz. Next, the resulting signal was rectified by taking the absolute value. We then low-pass filtered the resulting data below 50 Hz, following the method of [20].
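To make the pipeline concrete, the following is a minimal sketch in Python using NumPy and SciPy, assuming the raw EMG is a one-dimensional array sampled at 1,024 Hz. The function and variable names are ours, and the zero-phase filtering (filtfilt) is an assumption; this is not the authors' original code.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1024.0  # Nexus-4 sampling rate in Hz


def emg_envelope(raw_emg, fs=FS):
    """High-pass at 20 Hz, full-wave rectify, then low-pass at 50 Hz."""
    # Third-order Butterworth high-pass filter at 20 Hz
    # (cutoffs are normalized by the Nyquist frequency, fs / 2).
    b_hp, a_hp = butter(3, 20.0 / (fs / 2.0), btype="high")
    high_passed = filtfilt(b_hp, a_hp, raw_emg)

    # Full-wave rectification.
    rectified = np.abs(high_passed)

    # Third-order Butterworth low-pass filter at 50 Hz to smooth the
    # rectified signal into an amplitude envelope.
    b_lp, a_lp = butter(3, 50.0 / (fs / 2.0), btype="low")
    return filtfilt(b_lp, a_lp, rectified)
```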

In [20], the threshold for the onsets was computed in order to find the delays for activity onsets for muscles when subjects were asked to deliberately activate those muscles. In those conditions the baseline period for computing the threshold was taken as a 50 ms period while the muscle was "resting." This period is fine for identifying an initial use of a muscle against a background of resting but is not appropriate for extracting onsets from the background noise of walking. Since in this experiment, we were counting the number of onsets that occurred in a given time period, and since we were interested in the comparisons of these counts between different walking conditions, we therefore used the whole signal from the "walking on the floor" condition as our baseline. Therefore, the standard deviation from this data for each participant was computed, and the threshold defined as three times the standard deviation.

In order to identify the number of onsets in a signal, we counted the number of peaks in the filtered, rectified data that stayed above the threshold for at least 25 ms. This is shown in Fig. 2. This threshold and duration were used to compute the number of onsets for each of the six conditions. For each participant we then had onset counts for the left and right sensors for each condition: a total of 12 values.
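As an illustration of this counting rule, the sketch below builds on the envelope function above. It assumes the baseline is the participant's entire "walking on the floor" envelope; the names and structure are ours, not the authors' implementation.

```python
def count_onsets(envelope, baseline_envelope, fs=FS, min_duration_s=0.025):
    """Count supra-threshold runs lasting at least 25 ms.

    The threshold is three times the standard deviation of the baseline
    signal (here, the whole floor-walking envelope for this participant).
    """
    threshold = 3.0 * np.std(baseline_envelope)
    min_len = int(np.ceil(min_duration_s * fs))  # 25 ms is ~26 samples at 1,024 Hz

    onsets = 0
    run = 0
    for above in envelope > threshold:
        if above:
            run += 1
            if run == min_len:  # count each run once, when it first reaches 25 ms
                onsets += 1
        else:
            run = 0
    return onsets


# Example: onsets per second for one sensor in one condition.
# onset_rate = count_onsets(env, floor_env) / (len(env) / FS)
```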

2.3 Virtual Room Model

The model for this experiment was built using the Blender 3D content creation platform. The model was built to match exactly the dimensions of the room in our laboratory (Fig. 3). The lighting model was implemented using 3D Studio MAX. In this model the reflections were viewpoint independent.

Fig. 3. (a) The real room and (b) the virtual room.

Static objects in the scene had shadows that were baked into the textures used in the model. The shadows in this lighting model provided the participants with further confirming information that the virtual platform was raised. The model was rendered on the PC cluster using the XVR platform software [22]. The light levels in the virtual and the real room were matched using a Precision Gold lux and light meter that was accurate to within 0.1 lux. It should be noted that, while average light levels were matched, the contrast levels were still different: the real room was brighter in areas near the light sources than the virtual room.

2.4 Experimental Design

There were 12 male participants in the experiment, in a within-groups design where each experienced the Real Room (RR) and the Virtual Room (VR), each of which had three conditions. The participants were recruited by a mass email to the University College London community. All the participants were healthy males, with age 25 ± 13 years (mean ± SD).

The overall design is shown in Table 1. Six of the participants experienced the RR first and then the VR, and the remainder the other order. The three conditions in each group were 1) walking across the unmarked floor, 2) the same walk except that the floor was marked by a strip of material (real or virtual), and 3) the same walk except that the participants were on a narrow platform (real or virtual). There are six possible orders of the three conditions, and within each group participants were assigned randomly to these orders.
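As a toy illustration of this assignment (our own sketch; the paper does not state whether each order was used exactly once, which this version assumes), the six orders can be enumerated and shuffled among the six participants of a group:

```python
import random
from itertools import permutations

CONDITIONS = ("floor", "strip", "beam")
ORDERS = list(permutations(CONDITIONS))  # the 6 possible condition orders


def assign_orders(participants, seed=None):
    """Randomly give each of six participants a distinct condition order."""
    rng = random.Random(seed)
    shuffled = ORDERS[:]
    rng.shuffle(shuffled)
    return dict(zip(participants, shuffled))


# e.g., assign_orders(["P1", "P2", "P3", "P4", "P5", "P6"], seed=1)
```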

Each participant was required to walk a distance of 235 cm out and back ten times in each of the three real and three virtual conditions. In the beam condition participants did not step off the beam, but turned in place (in both the real and virtual conditions). The room had white walls, a blue carpeted floor, a desk, a chair, and a lamp (Fig. 3). The strip of material flat on the floor in Condition 2 was 235 cm × 14 cm. The platform in Condition 3 was 235 cm long, 14 cm wide, and 13 cm high.

In the VR Conditions 1 and 2, the floor of the environment was registered to the same level as the real floor of the Trimension. In VR Condition 3, the top of the platform was registered to the level of the floor of the Trimension. Therefore, the virtual floor was 13 cm below the level of the real floor.

2.5 Procedures

Upon arrival at the laboratory, each participant was given a pre-experiment questionnaire. This recorded the relevant background information for each participant, including age, video game experience, and back pain history. After completing the questionnaire, participants were given a handout describing the procedures of the experiment. This included a warning of the possible side effects of experiencing virtual environments, such as nausea and epileptic episodes. When they had read the information sheet, the participants were given the option of withdrawing from the experiment. None of the participants withdrew at this point, but instead signed a standard consent form.

TABLE 1
Conditions of the Experiment

REAL ROOM                      VIRTUAL ROOM
(RRf) Walking on the Floor     (VRf) Walking on the Floor
(RRs) Walking on the Strip     (VRs) Walking on the Strip
(RRb) Walking on the Beam      (VRb) Walking on the Beam

After the participant agreed to continue, the EMG sensors were placed on the skin along each side of the participant's lower lumbar spine. Once the sensors were attached, the task was explained to each participant. The Nexus-4 device was turned on and the server that recorded the EMG data was started.

The participants were assigned to their factor order (VR or RR first), followed by the platform, floor, or strip order within the VR or the RR. In all conditions, the participants were asked to walk out and back over the distance of 2.35 m ten times, on the strip or platform in those conditions. The participants were also instructed to remain in a state of quiet standing for 1 minute before each condition began. A video recorder was then started. In the VR conditions the participants had their shoes covered in order to avoid marking the floor (itself a screen).

Between each trial in the Cave (Trimension), the participants were told to look at a floor box cover at the right end of the virtual platform. This was to ensure that they saw the change in height for the virtual platform condition. Also, in the VR conditions events were recorded to indicate when the participant walked back to their starting point. This allowed us to remove from our analysis the data recorded during the time that the participant lost stereo vision in the Cave because they would have been facing the side of the Cave that did not have a screen.

At the end of the final condition, the video and the physiological recording were stopped and each participant was asked the question, "Did you see a raised platform in any of the Cave conditions?" (The answers were all "yes"). The participants were then paid 7 GBP and asked not to discuss this experiment with anyone for at least three months.

3 Results

In this experiment, we were interested in comparing how people walked, as measured by the EMG onset activations, in the real and virtual conditions, and in particular whether the narrowed base of support afforded by a virtual platform would induce a similar balance reaction as that of a real platform. The balance reaction was determined by the number of onset activities in the erector spinae muscles of the lumbar spine.

Fig. 4 shows bar graphs of the mean number of onsets per second in each condition. It is clear, for both the VR and RR conditions, that there is a significant difference in the mean number of onsets between the platform and the floor, but that the absolute levels are greater for the RR in the beam condition.

Fig. 4. The mean number of onsets per second for each condition. The whiskers represent the standard error for that condition.

Condition and Subject were the independent variables. Recall that Condition has six levels: for the real room, walking on the unmarked floor (RRf), walking on a strip (RRs), and walking on a beam (RRb), and similarly another three conditions for the virtual environment—VRf, VRs, and VRb. Two-way analyses of variance (ANOVA) were carried out with the log of the number of onsets per second as the response variable (separately for the left and right sides). The log response was taken since initial analyses revealed that the residuals of the fit did not have a normal distribution. "Subject" was included as an independent factor to take account of intersubject variability. "Order" (whether the VR condition was experienced before or after the RR condition) had no effect and is not considered further here.
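A minimal re-analysis sketch of this model in Python with pandas, statsmodels, and SciPy is given below, assuming a long-format table with one row per participant and condition (for one side) and columns subject, condition, and onsets_per_sec. The column names, and the use of Tukey's HSD as a stand-in for the paper's multiple contrast analysis, are our assumptions rather than the authors' exact procedure.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd
from scipy.stats import jarque_bera


def analyse_side(df: pd.DataFrame) -> None:
    """Two-way ANOVA on log onset rate with Condition and Subject as factors."""
    df = df.copy()
    df["log_onsets"] = np.log(df["onsets_per_sec"])  # log response, as in the paper

    model = smf.ols("log_onsets ~ C(condition) + C(subject)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))  # ANOVA table for the six conditions

    # Normality check on the residuals (Jarque-Bera, as in the paper).
    stat, p = jarque_bera(model.resid)
    print(f"Jarque-Bera p = {p:.3f}")

    # Pairwise comparisons between conditions (a stand-in for the paper's
    # multiple contrast analysis at an overall 0.05 significance level).
    print(pairwise_tukeyhsd(df["log_onsets"], df["condition"]))
```

The analysis would be run once for the left-sensor counts and once for the right-sensor counts.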

For the left onsets the ANOVA revealed a significant difference between the six conditions (P < 3.3 × 10⁻⁷). Multiple contrast analysis with an overall significance level of P = 0.05 results in:

μRRf ≈ μRRs < μRRb;  μVRf ≈ μVRs < μVRb,   (1)

where ≈ means "not significantly different from" and < means "significantly less than".

For the right onsets there is a significant difference between the conditions (P < 0.003), and the multiple comparisons tests reveal exactly the same qualitative results as above.

In each case a Jarque-Bera test [23] did not reject the hypothesis of normality of the residuals (P = 0.19 and 0.34 for the left and right ANOVAs, respectively). One of the 72 readings was an extreme outlier for both the left and right onsets; these readings, belonging to the same participant, were excluded from the results above.

Comparing the RR with the VR means using multiple contrasts, for both the left and right onsets, only in the beam condition was the RR mean significantly higher than the corresponding VR mean.

For the left onsets there were significant intersubject variations (three subjects had mean numbers of onsets that were significantly lower than some of the remainder with respect to condition VRf). If these subjects are removed, then the result for the VR conditions in (1) holds for P < 0.1 but not for P < 0.05. For the right onsets one subject had a mean significantly lower than the remainder, and when this subject was removed the results in (1) remained unchanged.

4 Discussion

Overall, the results might usefully be framed in the context of a Bayesian model of action, where the probability that the world is in a given state is some function of prior belief about the state of the world and our current sensory input. The action taken is a function of the probability of the current state and the consequences of the action [24]. The increase in extensor muscle activations on the platform is an action that reduces the probability of falling in the RR case. The results of the experiment suggest that this also occurs, though with a lesser intensity, in the VR case. The visual sensory information in the VR is convincing enough, despite the lack of haptic feedback, to make the probability that the person is walking on a raised platform sufficiently above zero for it to affect measurable behavior. This probability, combined with our knowledge of the consequences of falling, induces the bracing action.
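As a toy numerical illustration of this framing (our own hypothetical numbers, not data from the paper), an expected-cost comparison shows how even a modest belief in being at height can make bracing the cheaper action:

```python
def expected_costs(p_raised, cost_fall, cost_brace,
                   p_fall_unbraced=0.2, p_fall_braced=0.02):
    """Expected cost of not bracing vs. bracing, given the belief p_raised
    that one is walking on a raised platform (all numbers are illustrative)."""
    no_brace = p_raised * p_fall_unbraced * cost_fall
    brace = cost_brace + p_raised * p_fall_braced * cost_fall
    return no_brace, brace


# Even if the visual evidence makes being at height only 30% credible,
# bracing (cost 1) beats risking a fall (cost 100):
print(expected_costs(p_raised=0.3, cost_fall=100.0, cost_brace=1.0))
# -> (6.0, 1.6), so bracing has the lower expected cost.
```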

In previous experiments, physiological measures have been used to identify stress reactions that were induced by a virtual environment. These stress reactions have been presented as evidence that the virtual display has been accepted as a substitute for reality by the participant (hence, there is RAIR). It could be said that someone watching a movie could have a similar stress reaction. The difference here is the use of whole-body sensorimotor contingencies, the use of the whole body in walking as normal—clearly impossible when playing a traditional computer game or watching a movie. Sitting in a chair, muscles are not stretched and stretch reflexes are not activated as they would be in real walking. Peripheral vision, which can only be simulated in a surrounding virtual display, is crucial for determining the correct response to keep a person from falling.

The results of this paper therefore also help to delineate the differences between the experience of a virtual environment through a desktop interface and immersive systems that inevitably engage the whole body in movement. The positive influence on the reported sense of presence of engaging the whole body when locomoting through a VR has been shown in a previous questionnaire study [25]. Since we are evaluating an everyday activity such as walking, this result also holds promise as a first step towards establishing what a valid response to a nonstressful virtual environment might look like. It also holds the promise that immersive virtual reality might be used successfully in the understanding and treatment of postural and gait-related pathologies, for example [26], [27].

5 Conclusion

In this paper, we have described an experiment that used EMG recordings to present evidence that people experiencing an immersive virtual reality react to the stimulus of the virtual environment rather than to the real world in which the whole experience is, of course, embedded. Having established that EMG can detect the response of a participant's body to a virtual world, the next step would be to further investigate the sensitivity of EMG to more subtle variations in the conditions. This means that rather than changing the overall content of the scene between conditions, we may change just the sophistication of the lighting by adding dynamic shadows, or improve the physics to allow the user to "step up" onto the virtual platform. In order for designers of virtual reality systems to use such a measure in practice it must be shown to be sufficiently sensitive to such differences in configuration.

It should be noted that, in spite of the similarity between the RR and VR conditions in the changes in muscle activation between floor, strip, and beam, there is a large and significant difference between activation on the beam in the RR and on the beam in the VR. Anecdotally, this was visible to the experimenters, who saw that when participants reached the end of the real beam they would exhibit greater body sway to keep their balance while turning. When a person's foot moves beyond the edge of the platform, they experience an absence of supporting thrust that confirms that they are at height. We have little doubt that this haptic feedback accounted for some of the difference between the real and virtual beam conditions. Further work must attempt to understand these differences, whether they are due only to cognitive reasons (participants know in VR that they are not on a beam), to a failure in haptics (they do not feel as if they are walking on a wooden beam), or to visual perception (the difference in the quality of the visual images). Much research remains to be done in this area.

Acknowledgments

This research was funded under the EU FET Integrated Project PRESENCCIA Contract Number 27731. Thanks to Dr. Henrik Ehrsson and Dr. Robert Leeb for helpful comments on this paper.

References

[1] A. Noe, Action In Perception. Bradford Book, 2004.

[2] C. Cruz-Neira et al., "The CAVE: Audio Visual Experience Automatic Virtual Environment," Comm. ACM, vol. 35, no. 6, pp. 64-72, 1992.

[3] C. Cruz-Neira, D.J. Sandin, and T.A. DeFanti, "Surround-Screen Projection-Based Virtual Reality: The Design and Implementation of the CAVE," Proc. 20th Ann. Conf. Computer Graphics and Interactive Techniques, pp. 135-142, 1993.

[4] M. Slater, "Place Illusion and Plausibility Can Lead to Realistic Behaviour in Immersive Virtual Environments," Philosophical Trans. Royal Soc., vol. 364, pp. 3549-3557, 2009.

[5] R.M. Held and N.I. Durlach, "Telepresence," Presence: Teleoperators and Virtual Environments, vol. 1, no. 1, pp. 109-112, 1992.

[6] W. Barfield and S. Weghorst, "The Sense of Presence Within Virtual Environments: A Conceptual Framework," Human-Computer Interaction: Software and Hardware Interfaces, G. Salvendy and M. Smith, eds., pp. 699-704, Elsevier, 1993.

[7] T.B. Sheridan, "Musings on Telepresence and Virtual Presence," Presence: Teleoperators and Virtual Environments, vol. 1, no. 1, pp. 120-126, 1992.

[8] T.B. Sheridan, "Further Musings on the Psychophysics of Presence," Presence: Teleoperators and Virtual Environments, vol. 5, no. 2, pp. 241-246, 1996.

[9] J.V. Draper, D.B. Kaber, and J.M. Usher, "Telepresence," Human Factors, vol. 40, no. 3, pp. 354-375, 1998.

[10] M.V. Sanchez-Vives and M. Slater, "From Presence to Consciousness through Virtual Reality," Nature Rev. Neuroscience, vol. 6, no. 4, pp. 332-339, 2005.

[11] M. Minsky, "Telepresence," Omni, vol. 2, pp. 45-52, 1980.

[12] D.P. Pertaub, M. Slater, and C. Barker, "An Experiment on Public Speaking Anxiety in Response to Three Different Types of Virtual Audience," Presence-Teleoperators and Virtual Environments, vol. 11, no. 1, pp. 68-78, 2002.

[13] M. Meehan et al., "Effect of Latency on Presence in Stressful Virtual Environments," Proc. IEEE Virtual Reality Conf., pp. 141-148, 2003.

[14] M. Meehan et al., "Physiological Measures of Presence in Stressful Virtual Environments," ACM Trans. Graphics, vol. 21, no. 3, pp. 645-652, 2002.

[15] M. Slater et al., "Visual Realism Enhances Realistic Response in an Immersive Virtual Environment," IEEE Computer Graphics and Applications, vol. 29, no. 3, pp. 76-84, May 2009.

[16] M. Slater et al., "Analysis of Physiological Responses to a Social Situation in an Immersive Virtual Environment," Presence: Teleoperators and Virtual Environments, vol. 15, no. 5, pp. 553-569, 2006.

[17] M. Slater et al., "A Virtual Reprise of the Stanley Milgram Obedience Experiments," PLoS ONE, vol. 1, no. 1, p. e39, Dec. 2006, doi:10.1371/journal.pone.0000039.

[18] B. Bideau et al., "Real Handball Goalkeeper vs. Virtual Handball Thrower," Presence: Teleoperators and Virtual Environments, vol. 12, no. 4, pp. 411-421, 2003.

[19] J. Jordan and M. Slater, "An Analysis of Eye Scan Path Entropy in a Progressively Forming Virtual Environment," Presence: Teleoperators and Virtual Environments, vol. 18, no. 3, pp. 185-199, 2009.

[20] R. Di Fabio, "Reliability of Computerized Surface Electromyography for Determining the Onset of Muscle Activity," Physical Therapy, vol. 67, no. 1, pp. 43-48, 1987.

[21] C. De Luca, "The Use of Surface Electromyography in Biomechanics," J. Applied Biomechanics, vol. 13, pp. 135-163, 1997.

[22] M. Carrozzino et al., "Lowering the Development Time of Multimodal Interactive Application: The Real-Life Experience of the XVR Project," Proc. ACM SIGCHI Int'l Conf. Advances in Computer Entertainment Technology, pp. 270-273, 2005.

[23] A. Bera and C. Jarque, "Efficient Tests for Normality, Homoscedasticity and Serial Independence of Regression Residuals: Monte Carlo Evidence," Economics Letters, vol. 7, no. 4, pp. 313-318, 1981.

[24] K.P. Kording and D.M. Wolpert, "Bayesian Integration in Sensorimotor Learning," Nature, vol. 427, no. 6971, pp. 244-247, Jan. 2004.

[25] M. Usoh et al., "Walking > Walking-in-Place > Flying, in Virtual Environments," Proc. 26th Ann. Conf. Computer Graphics and Interactive Techniques, pp. 359-364, 1999.

[26] H. Sveistrup, "Motor Rehabilitation Using Virtual Reality," J. NeuroEng. Rehabilitation, vol. 1, no. 1, p. 10, 2004.

[27] M.K. Holden, "Virtual Environments for Motor Rehabilitation: Review," Cyberpsychology & Behavior, vol. 8, no. 3, pp. 187-211, 2005.
