Augmenting the Field-of-View of Head-Mounted Displays with Sparse Peripheral Displays


Robert Xiao (1,2) and Hrvoje Benko (1)

1 Microsoft Research, One Microsoft Way, Redmond, WA 98052
2 Carnegie Mellon University, HCI Institute, 5000 Forbes Avenue, Pittsburgh, PA 15213

Figure 1. Our sparse peripheral display prototypes: (a) Virtual reality prototype SparseLightVR, with a 170º field of view, showing LED arrangement. (b) SparseLightVR imaged from the inside showing a forest scene. (c) Augmented reality prototype SparseLightAR, with a field of view exceeding 190º. (d) SparseLightAR showing a scene with butterflies and beach balls.

ABSTRACT

In this paper, we explore the concept of a sparse peripheral display, which augments the field-of-view of a head-mounted display with a lightweight, low-resolution, inexpensively produced array of LEDs surrounding the central high-resolution display. We show that sparse peripheral displays expand the available field-of-view up to 190º horizontal, nearly filling the human field-of-view. We prototyped two proof-of-concept implementations of sparse peripheral displays: a virtual reality headset, dubbed SparseLightVR, and an augmented reality headset, called SparseLightAR. Using SparseLightVR, we conducted a user study to evaluate the utility of our implementation, and a second user study to assess different visualization schemes in the periphery and their effect on simulator sickness. Our findings show that sparse peripheral displays are useful in conveying peripheral information and improving situational awareness, are generally preferred, and can help reduce motion sickness in nausea-susceptible people.

ACM Classification Keywords

H.5.1. Information interfaces and presentation (e.g. HCI): Multimedia information systems: Artificial, augmented, and virtual realities.

Author Keywords

Augmented reality; virtual reality; wide field-of-view; sparse peripheral display; peripheral vision

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]. CHI'16, May 07 - 12, 2016, San Jose, CA, USA Copyright is held by the owner/author(s). Publication rights licensed to ACM. ACM 978-1-4503-3362-7/16/05$15.00  DOI: http://dx.doi.org/10.1145/2858036.2858212

INTRODUCTION

Head-mounted Augmented and Virtual Reality (AR and VR) devices are increasingly popular due to a recent resurgence in interest, driven by technological improvements such as better displays, processing power and mobility. Such mixed-reality systems offer the promise of immersing a person in a virtual environment or digitally augmenting a person's vision, enabling diverse applications in areas including gaming, information visualization, medical assistance, and more immersive communication.

However, one common limitation of such devices is their limited field-of-view (FOV). The human visual system has a binocular FOV exceeding 180º horizontally, yet current head-mounted virtual reality devices, such as the Oculus Rift, are limited to around 90º horizontally (Figure 3). Augmented reality devices such as the Lumus DK-40 glasses and the upcoming Microsoft HoloLens are even narrower, at around 40º horizontal (Figure 6). Notably, this means that the users of such displays either see pitch black (in VR) or an absence of virtual content (in AR) in their peripheral vision. This restricted FOV limits the immersive potential of mixed-reality systems, reduces the situational awareness of the person, and leaves the vast information-processing capabilities of the human visual system underutilized.

We propose the concept of sparse peripheral displays, which fill the periphery of a head-mounted display using a low-resolution, high-contrast array of diffused, colored LED lights whose rendering is tightly coupled to the content presented on the device (Figure 1). Because sparse peripheral displays can be lightweight and inexpensive to construct, they can easily be retrofitted to existing head-mounted displays, enhancing existing devices. To demonstrate our concept's applicability to a range of form factors, we built two proof-of-concept implementations: a virtual reality display, called SparseLightVR, and an augmented reality display, called SparseLightAR.

We evaluated the effectiveness of our concept using our SparseLightVR prototype in a pair of user studies. In addition to confirming the benefits of increased situational awareness, our experiments showed that participants preferred SparseLightVR, reporting fewer effects of simulator sickness or nausea compared to the plain VR display. Our participants found our novel peripheral countervection visualization particularly beneficial in reducing nausea. Our results confirm that sparse peripheral displays can be used to expand the field-of-view in mixed-reality environments while also countering the common nauseating effects of an enlarged field-of-view [8, 16].

In summary, this paper makes the following three contributions:





• The concept of sparse peripheral displays and two proof-of-concept implementations: a virtual reality display (SparseLightVR) and an augmented reality display (SparseLightAR), along with a detailed description of their hardware and software designs.
• A peripheral countervection visualization, designed to reduce the effects of simulator sickness by presenting motion stimulation on the retina that counters any additional motions not generated by the user's head movement itself (e.g., motion derived from controller input).
• A 17-person evaluation of SparseLightVR testing its ability to provide additional context to mixed-reality interactions, and a follow-up study examining the effects of various rendering strategies on simulator sickness.

BACKGROUND AND RELATED WORK

In contrast to foveal vision, which senses detail, color, and textures [27], peripheral vision is much lower resolution and attuned to sensing contrast [1] and motion [20]. Thus, displays targeting the periphery have different requirements than displays for the fovea, leading us to design our sparse peripheral displays to complement the high-resolution foveal display.

FOV and Simulator Sickness

Simulator sickness is a type of motion sickness induced by visual information that conflicts with vestibular and proprioceptive sensory cues [23]. The principal source of simulator sickness is induced perception of self-motion, or vection, caused by motion cues in the visual field that are not corroborated by vestibular sensory data [14]. In turn, vection is primarily derived from peripheral visual cues [5, 6], with the central visual field playing only a small role.

Consequently, research has confirmed that wide FOV displays can often induce simulator sickness more easily than narrow FOV displays [8, 16]. However, because wide FOV displays also result in higher presence and immersion [16], system designers face a difficult decision when choosing an appropriate FOV. In addition, technical difficulties in producing commercial devices with wide FOV (e.g., complicated optics, increased weight, etc.) often limit the available FOV, limiting the depth of virtual experiences available to users.

Explorations of Wide FOV

Outside of AR/VR, many systems have explored displaying information or visualizations in the visual periphery. Most commonly, these peripheral displays provide additional context to a foveal display, an arrangement referred to as a focus plus context display [3]. Such displays often feature low-resolution context displays paired with a high-resolution focus display, similar in concept to the present sparse peripheral displays.

Other systems have explored using the periphery to present information and notifications [15], enhance user experiences through immersive peripheral projections [10], or expand the effective size of a display through low-resolution ambient light [26]. Recently, FoveAR [4] combined optically see-through glasses with wide FOV background projection as another example of such focus + periphery hybrid displays.

In the HMD space, Jones et al. [11] found that strategically placed static, bright areas in the peripheral visual field resulted in more accurate distance perception by HMD users, suggesting that even static illumination in the periphery can improve HMD experiences. In contrast, Knapp and Loomis found that reducing the FOV did not directly cause distance underestimation [13]. Watson et al. [28] found that degradation of peripheral resolution did not significantly affect user performance on a complex visual search task.

Wide FOV HMDs

A few recent AR research prototypes [2, 18, 19] demonstrate higher FOVs (up to 92º horizontal, compared with 40º for commercial devices), albeit with significantly reduced image quality across the entire display. For example, Pinlight Displays [19] use a special light field display consisting of an LCD backed with an array of point light sources. These displays require significant computational power to render a full light field. Furthermore, the ultimate resolution of the display is diffraction-limited.

In VR, a number of systems have attempted to widen the field of view with complex optical configurations. For example, StarVR [9] (210º horizontal) and Wearality Sky [29] (150º diagonal) both use complex Fresnel lenses which are challenging to manufacture and design, and introduce optical distortions which are hard to mitigate. Nagahara et al. [21] (180º horizontal) use a novel optical arrangement (catadioptric optics), significantly increasing the weight of their device (3.75 kg headset).

Other approaches are possible. For instance, Yang and Kim [30] compress the virtual environment to provide a higher virtual FOV. Similarly, Orlosky et al. [24] warp the display using fisheye lenses. Although these warping approaches result in a higher virtual FOV, they trade off physical accuracy to do so.

Most related to our approach are systems that explicitly place low-resolution displays in the periphery. Nojima et al. [22] demonstrate a prototype peripheral display consisting of four 16x32 LED arrays worn on the user's head. These display a moving random dot pattern as the user operates a normal computer monitor, increasing the user's sense of self-motion. Baek et al. [2] place a pair of low-resolution LCDs on either side of an HMD, but do not evaluate their prototype or describe its implementation in detail.

Figure 2. SparseLightVR device. (a) Unmodified Oculus Rift DK2 device. (b) Rift with LED arrays installed. (c) Rift after Delrin diffuser installed. (d) SparseLightVR in operation. Note that the LEDs above the nose are disabled to avoid crosstalk, since those can be seen by both eyes.

IMPLEMENTATION DETAILS

Both our VR and AR prototype displays share some of the same core components. The displays consist of a number of individually addressable RGB LEDs (WS2812S 5050), connected in series to an Arduino microcontroller. These LEDs provide 24-bit color (8 bits per channel) and are all controlled from a single Arduino pin using a cascading serial protocol.

Due to the high brightness of these LEDs coupled with their proximity to the eyes, we covered the LEDs with laser-cut diffuser material to absorb some of the light and smooth out the LED hotspots (Figures 2c, 5d). To further reduce the brightness of these LEDs (which are designed for outdoor lighting applications), we undervolted them, driving them at 3.0 V instead of the recommended 5.0 V. We also performed a quick manual calibration of the LEDs to the central display, adjusting our software's per-channel scale values so that the LEDs match both the brightness and color gamut of the display.

The rendering device continuously streams color data to the Arduino over a serial connection running at 500 kilobits per second. The rendering device is responsible for computing the color value output by each LED.
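The paper does not document the firmware or the serial frame format, but a minimal Arduino-side sketch might look like the following. The frame layout (a 0xFF start byte followed by three bytes per LED), the data pin, and the use of the Adafruit NeoPixel library are assumptions made purely for illustration.

// Hypothetical firmware sketch: receive one RGB frame per update over serial
// and push it to the WS2812 chain. Frame format (assumed): a 0xFF start byte,
// then 3 bytes (R, G, B) per LED. The paper does not specify its actual protocol.
#include <Adafruit_NeoPixel.h>

const int NUM_LEDS = 70;    // SparseLightVR uses 70 LEDs; SparseLightAR uses 112
const int DATA_PIN = 6;     // assumed data pin

Adafruit_NeoPixel strip(NUM_LEDS, DATA_PIN, NEO_GRB + NEO_KHZ800);

void setup() {
  Serial.begin(500000);     // 500 kbps serial link from the rendering device
  strip.begin();
  strip.show();             // start with all LEDs off
}

void loop() {
  if (Serial.available() && Serial.read() == 0xFF) {   // wait for start-of-frame
    uint8_t frame[NUM_LEDS * 3];
    if (Serial.readBytes(frame, sizeof(frame)) == sizeof(frame)) {
      for (int i = 0; i < NUM_LEDS; i++) {
        strip.setPixelColor(i, frame[3 * i], frame[3 * i + 1], frame[3 * i + 2]);
      }
      strip.show();         // latch the new frame onto the LED chain
    }
  }
}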

SparseLightVR

Our VR prototype, called SparseLightVR, is built on a commercial Oculus Rift DK2 head-mounted virtual reality device (Figure 2a). This device has a central high-resolution display of 960x1080 pixels per eye, updating at 75 Hz. The binocular high-resolution field-of-view is approximately 84º horizontal.

Figure 3. LED placements in SparseLightVR relative to the visual FOV of human vision. Purple box: Oculus Rift DK2 binocular FOV. Purple circles: SparseLightVR LEDs. Figure adapted from Jones, et al. [11].

Figure 4. Close-up image from the inside of SparseLightVR. The sparse peripheral display shows the sky (top), ground (bottom), and a nearby off-screen red helicopter (left).

Figure 5. SparseLightAR construction. (a) Our custom AR head-worn display prototype. A Samsung Galaxy S6 is mounted on top, connected to the external IMU and the SparseLightAR controller (left assembly). (b) SparseLightAR LED arrays installed. (c) SparseLightAR with paper diffusers added, in operation showing a scene with flying butterflies (blue).

The SparseLightVR peripheral display consists of 70 LEDs broken up into four groups, arranged as shown in Figures 3 and 4. Two groups of LEDs are placed in the far periphery inside the left and right edges of the display, and the other two groups are configured as rings in the near periphery surrounding the Oculus' two eye displays. Laser-cut pieces of Delrin plastic cover the LEDs to diffuse the light (Figure 2c). The total sparse FOV is approximately 170º horizontal (Figure 3). The sparse peripheral display is driven at 100 Hz using an Arduino Duemilanove microcontroller, slightly faster than the Oculus' native display refresh rate of 75 Hz.

SparseLightVR is driven by a desktop computer running Windows 8.1 with a Core i7-3770 CPU and an NVIDIA Quadro K5000 graphics card. Applications were developed in Unity 5 using the Oculus VR 0.6 SDK.

SparseLightAR

Our AR prototype, called SparseLightAR, is built on a custom head-mounted see-through augmented reality device. This AR device is a modification of the commercial Samsung Gear VR device. It consists of a smartphone (Samsung Galaxy S6) mounted flat across the top, connected to an external inertial measurement unit (Figure 5a). Collimating lenses opposite the display split the display output into two halves, one per eye, with each image individually projected at infinity. A half-silvered mirror mounted at a 45º angle between the display and lenses redirects the collimated images onto the eyes. The effect is that the display's contents are overlaid onto the environment, creating a see-through augmented reality experience. The Samsung S6 display provides a resolution of 1280x1440 per eye at a refresh rate of 60 Hz. The horizontal FOV of our AR display is 62º, comparable with similar wide FOV AR systems [7].

The SparseLightAR peripheral display for this prototype consists of 112 LEDs (Figure 5b), updated 75 times per second and arranged as two 4x14 rectangular matrices, one along each edge of our unit. In total, the field of view exceeds 190º horizontally (Figure 6). A DFRobot Beetle microcontroller drives the sparse peripheral display and is connected to the smartphone via USB On-The-Go. All rendering, microcontroller communication and motion tracking are performed on the phone itself without an external computer, and all LEDs are powered from the phone's USB port. SparseLightAR applications are written in Unity 5 using Unity's Android SDK and a modified version of the Samsung Gear VR SDK. As the Gear VR SDK is itself based on the Oculus VR SDK, most of our Unity code is common to both SparseLightVR and SparseLightAR.

As the LED arrays are opaque, we chose to leave the top and bottom areas of the field of view clear to preserve environmental peripheral cues (Figure 5c). We later discuss future designs employing smaller LEDs that could keep the environment visible in the spaces between LEDs.

Figure 6. LED placements in SparseLightAR relative to the visual FOV. Gray box: Lumus AR [17] FOV for comparison. Purple box: Our custom AR display’s binocular FOV. Purple circles: SparseLightAR LEDs. Figure adapted from Jones, et al. [11].

Scene and LED Rendering

We used the Unity 5 game engine to implement the 3D virtual scenes and scenarios for our sparse peripheral displays. We implemented a reusable rendering component, which we called a light probe, responsible for updating a single LED. The designer can place many such light probes in the scene, matching the arrangement of the LEDs of the sparse peripheral display they are representing (Figure 7). This configuration is easily attached to the user's view and is updated as the user moves around the scene.

Figure 7. Light probes used to render SparseLightVR. In the center, light probes for both eyes overlap.

Figure 9. Countervection rendering. (a) Sphere with motion bars in Unity. (b) Countervection rendering as it appears in SparseLightVR.

For each eye, a virtual camera renders the scene into a color buffer (Figure 8a). The color buffer is partitioned using a Voronoi diagram (Figure 8b), assigning each pixel in the buffer to the nearest light probe. The output color from each probe is the average color (using the CIELAB colorspace) of all pixels assigned to it. In this way, we guarantee that every object in the field of view contributes to at least one light probe, allowing even small details to be seen.

Our rendering implementation and light probe configurations are packaged into a prefabricated Unity object, enabling existing projects to be augmented simply by dropping this object into the scene. Furthermore, these light probes can be added to the Oculus VR SDK's prefabricated headset object, allowing applications using the SDK to automatically support our sparse peripheral displays.

The actual color values sent to the LEDs are adjusted to match the colorspace of the central display (converting from the standardized sRGB colorspace to linear RGB) and then multiplied by predetermined per-channel scale factors, which correct for color gamut mismatches and brightness discrepancies. We took special care to match the gamut and brightness of our sparse peripheral display to the central display, as that makes the experience seamless and comfortable for longer use.
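To make the light-probe pipeline concrete, the following is a rough, illustrative sketch of the per-LED color computation, written as standalone C++ rather than the authors' Unity code. The probe positions, buffer layout, and per-channel scale factors are assumptions; the averaging is done in sRGB for brevity even though the paper averages in CIELAB, and the distance cutoff mentioned in Figure 8 is omitted.

// Illustrative sketch of the light-probe color computation (not the authors' Unity code).
// Each LED is represented by a probe with a 2D position in the rendered color buffer.
// Every pixel is assigned to its nearest probe (a Voronoi partition), the assigned pixels
// are averaged, and the result is converted from sRGB to linear RGB and scaled per channel.
#include <cstddef>
#include <cstdint>
#include <cmath>
#include <vector>

struct Probe { float x, y; };   // probe position in buffer coordinates (assumed)
struct RGB   { double r, g, b; };

// sRGB (0..1) -> linear RGB, as used when matching the LEDs to the central display.
static double srgbToLinear(double c) {
    return (c <= 0.04045) ? c / 12.92 : std::pow((c + 0.055) / 1.055, 2.4);
}

// buffer: width*height pixels, 3 bytes per pixel (sRGB). scale: per-channel calibration
// factors (assumed values) correcting gamut and brightness mismatches with the display.
std::vector<RGB> computeLedColors(const uint8_t* buffer, int width, int height,
                                  const std::vector<Probe>& probes, const RGB& scale) {
    std::vector<RGB> sum(probes.size(), RGB{0, 0, 0});
    std::vector<long> count(probes.size(), 0);

    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            // Voronoi assignment: find the nearest probe for this pixel.
            size_t best = 0;
            double bestDist = 1e30;
            for (size_t p = 0; p < probes.size(); ++p) {
                double dx = x - probes[p].x, dy = y - probes[p].y;
                double d = dx * dx + dy * dy;
                if (d < bestDist) { bestDist = d; best = p; }
            }
            const uint8_t* px = buffer + 3 * (static_cast<size_t>(y) * width + x);
            // The paper averages in CIELAB; averaging in sRGB here keeps the sketch short.
            sum[best].r += px[0] / 255.0;
            sum[best].g += px[1] / 255.0;
            sum[best].b += px[2] / 255.0;
            count[best]++;
        }
    }

    std::vector<RGB> out(probes.size(), RGB{0, 0, 0});
    for (size_t p = 0; p < probes.size(); ++p) {
        if (count[p] == 0) continue;
        RGB avg = { sum[p].r / count[p], sum[p].g / count[p], sum[p].b / count[p] };
        // Convert to linear RGB and apply per-channel calibration before sending to the LED.
        out[p] = { srgbToLinear(avg.r) * scale.r,
                   srgbToLinear(avg.g) * scale.g,
                   srgbToLinear(avg.b) * scale.b };
    }
    return out;
}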

Peripheral Visualizations

As has been shown in several focus + periphery hybrid displays (e.g., Focus+Context [3], IllumiRoom [10]), designers have many options when creating content to be displayed in the periphery. They can choose to purely extend the FOV as faithfully as possible, or use the periphery to highlight a particular feature or style. We developed a number of proof-of-concept visualization schemes to explore how to effectively use sparse peripheral displays in head-mounted displays. In particular, we implemented the following three visualizations: full environment, objects only, and a novel rendering mode called countervection visualization.

Full Environment

This is the most naïve visualization, in which the periphery extends the FOV of the main display as faithfully as possible. All objects and scenery are rendered into the sparse display.

Objects of Interest Only

As an alternative to rendering the full environment, the designers can choose to render only select objects in the periphery. This can be helpful to draw the person's attention to a particular object in their periphery that is currently not visible on the main foveal display.

Figure 8. LED rendering using light probes. (a) Original rendered color buffer. (b) Colors averaged using a Voronoi diagram with a distance cutoff. Each solid region corresponds to a single light probe.

Countervection Visualization

We contribute a novel visualization particularly suited to peripheral displays, which attempts to counteract motions not generated by physical head movement. We designed this visualization to reduce the negative effects of vection-induced simulator sickness caused by motion cues in the visual field [14]. Our countervection visualization consists of a series of vertical bars, placed in a spherical arrangement around the person's avatar and only visible in the sparse peripheral display (Figure 9). The sphere does not respond to physical head motions. However, when the user's avatar moves or rotates (e.g., due to user input), the bars are shifted (via texture manipulation) so that they move in the same direction as the motion, i.e., opposite to what normally would occur. We call these contrary motions countervection cues, as they oppose the motion cues of the central display and thus reduce the feeling of vection (self-motion). For example, if the user walks forward, the bars will also move forward. The bars are configured to move slightly slower than the actual motion, as we found that a 1:1 mapping produced an overwhelming motion effect in the opposite direction during pilot testing.

Figure 10. The forest scene used to introduce users to SparseLightVR. Left: the virtual scene as rendered for the Oculus Rift DK2. Right: scene as it appears inside the unit with SparseLightVR enabled.

The countervection visualization is designed to partially cancel the vection experienced in the peripheral regions of the VR display. As such, we aimed to balance the strength of the effect (in terms of brightness) and the speed of the motion with the VR periphery.

Additionally, we implemented a dynamic switching strategy in which the full environment is rendered while the user is not moving or when the movement comes only from the physical movement of their head, but countervection cues fade in when the user begins to move via external input (e.g., joystick input). This means that we fade in the countervection cues only when the person is experiencing visual-vestibular mismatch. This allows the user to attain the benefit of a wider field of view (better immersion, more information) while also benefiting from the reduced vection (and thus reduced visual-vestibular conflict) of the countervection visualization. Our second study specifically examines the effects of the countervection and full environment visualizations on simulator sickness.
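The following is a small illustrative sketch (not the authors' Unity implementation) of how the countervection bar motion and the dynamic switching could be driven each frame. The gain and fade-rate values are assumptions; the paper reports only that the bars move slightly slower than the actual motion.

// Illustrative per-frame update for the countervection bars and dynamic switching.
// The bar texture is scrolled in the same direction as avatar motion at a reduced
// gain, and the effect fades in only while motion comes from external input.
#include <algorithm>

struct CountervectionState {
    float barOffset = 0.0f;   // horizontal texture offset applied to the vertical bars
    float fade      = 0.0f;   // 0 = full environment rendering, 1 = countervection bars

    static constexpr float kGain     = 0.6f;  // < 1:1; a 1:1 mapping felt overwhelming (value assumed)
    static constexpr float kFadeRate = 4.0f;  // fade-in/out speed per second (assumed)

    // externalMotion: signed rate of avatar motion NOT caused by physical head movement
    // (e.g., joystick translation or rotation), projected onto the bars' scroll axis.
    void update(float dt, float externalMotion) {
        // Scroll the bars with (not against) the motion, opposing the optic flow of the
        // central display and thereby reducing the induced sense of self-motion.
        barOffset += kGain * externalMotion * dt;

        // Dynamic switching: show the bars only during externally driven motion, i.e.,
        // while the user experiences a visual-vestibular mismatch. A deadzone on the
        // input would be used in practice to ignore controller noise.
        float target = (externalMotion != 0.0f) ? 1.0f : 0.0f;
        float step   = kFadeRate * dt;
        if (fade < target)      fade += std::min(step, target - fade);
        else if (fade > target) fade -= std::min(step, fade - target);
    }
};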





Figure 11. Search task. The user uses a cursor in the middle of the screen to select a white target from a grid of circles. (a) Narrow FOV condition. (b) Regular FOV condition. (c) SparseLightVR condition: the white dot (top-right) is rendered to SparseLightVR as an activated LED.

USER EVALUATIONS

We conducted two evaluations with 17 participants (6 female, mean age 28). Participants completed the two studies in a single session lasting 60 minutes: a search task where participants moved their head to locate an object, and a simulator sickness evaluation in which participants were asked to evaluate their subjective nausea over a range of rendering conditions.

Participants were given the option to stop the experiment at any time. Each participant was compensated $10 for participating, regardless of how much they completed. All 17 participants completed the search task, while 14 participants completed the sickness evaluation.

Procedure

Participants were first asked about their experiences with virtual reality. Only one person mentioned using virtual reality devices regularly; the remainder had either used them on a small number of occasions just as a trial, or had never used them. The experimenter then had the participant wear the SparseLightVR device and had them move around a simulated forest environment (Figure 10) using a gamepad controller, without the sparse peripheral display enabled. After a minute of familiarizing the participant with the device and the virtual reality experience, the experimenter switched on the sparse peripheral display, which rendered an expanded view of the forest. The participant was asked to describe their reaction and first impressions of the system. After this initial induction phase, the participant began the first study.

Figure 12. Virtual scene of the first study search task. Target shown at left. The LEDs are configured to only show the target.

Study 1 – Search Task

The goal of the first study was to examine the effect of field of view on performance in an object search task. Three fields of view were tested, as shown in Figure 11: a simulated "augmented reality" field of view measuring 25 degrees vertical and 50 degrees horizontal (slightly larger than the field of view of the Lumus AR glasses); a full "virtual reality" field of view employing the Oculus Rift's full foveal resolution (84 degrees horizontal); and a wide "peripheral" field of view which used the Oculus Rift's full foveal field-of-view alongside SparseLightVR for a total of 170 degrees horizontal. In this task, SparseLightVR was configured to render in the "object-of-interest only" visualization mode. For consistency, we term these conditions Narrow, Regular, and SparseLightVR, respectively.

Participants were presented with a virtual 10x10 grid of circles, arranged in a hemispherical arc 1 meter from the participant and occupying a field of view of 160 degrees horizontal and 80 degrees vertical (shown in Figure 12). Upon beginning the task, a white target would appear at a random grid position. The participant was to move their head until a cursor in the center of their gaze aligned with the target, and click a button on the gamepad to select it. This constituted a single trial. The target would then disappear and reappear at a different grid point. The participant performed three blocks of 75 trials each, with each block having a different field of view condition. Condition (block) order was randomized per participant to control for order effects, and the first 25 trials for each block were discarded to reduce learning effects.

Following this first study, participants were asked to give their impressions of the system, and then took a short break before the next study.

Figure 13. Chase task. Participants follow the red orb to each waypoint. The orb leaves a long translucent trail for users to follow.

Study 2 – Simulator Sickness

Wide FOV displays are commonly cited in the literature as increasing the sickness and nausea experienced by users of virtual reality devices [8, 16]. Consequently, we sought to evaluate whether sparse peripheral displays suffered from the same limitation and whether our countervection cues helped people reduce the effects of simulator sickness. We compared three conditions: full Oculus Rift FOV without the sparse periphery, extended FOV using the full environment visualization in the sparse periphery to render an extension of the scene, and countervection extended FOV in which the sparse periphery displayed countervection cues while the user was in motion.

Our second study proceeded in two parts. In the first part, participants acquainted themselves with our rendering conditions through a simple chase task, following a glowing orb as it moved around a forest clearing (Figure 13). The orb moved along a predetermined path, stopping at 20 waypoints (Figure 14). At each waypoint, the orb stopped to orbit a nearby tree and waited for the participant to approach it; once they moved close enough, the orb moved to the next waypoint. Participants controlled their movement in the game environment using a gamepad and their head movements. The participants repeated this task under each of our three rendering conditions, with the orb moving along the same path each time. Of the 17 participants in this experiment, three did not complete the chase task; all three tried the task once or twice and felt too sick to continue to another repetition. Following each task repetition, participants filled out the Simulator Sickness Questionnaire [12] and were given a five-minute rest to alleviate any lingering nausea.

Figure 14. Orb path for the chase task. The orb’s path consists of a mix of sharp and shallow turns, so that participants try out different turning radii.

Figure 15. Results of the search task in Study 1.

Following the chase task, participants were asked if they were willing to continue with a final, possibly difficult task in which they were involuntarily moved around a space (i.e., moved on "rails"). In this task, participants were able to switch between the three conditions from study 2 using buttons on the gamepad, and try different conditions out as they moved through the space. Participants were asked specifically to rank the conditions in order of comfort based on which conditions they felt minimized their nausea. Participants were given as long as they wanted to make this determination, and were asked to stay until they had fully determined their ranking.

Participants who declined the involuntary "rails" task were asked to complete it with voluntary movement instead, i.e., where they both moved themselves around and changed the rendering mode using the gamepad. Of the 14 participants who completed the chase task, three chose this option.

In the involuntary movement-on-rails task, participants moved along a looping path which repeated until they stopped the experiment. Along both straightaways, the participant's avatar accelerated gradually to the midpoint and then decelerated to a stop. At the end of each straightaway, the avatar jumped about 1.2 m into the air to provide some vertical motion, then proceeded to execute the turn and the next straightaway.

RESULTS

Although we did not explicitly ask participants to rate our system overall, many participants reacted positively to the system, noting that they “liked this much more” (P2), “[SparseLightVR] felt the most natural” (P12), or that tasks were “a lot easier with [SparseLightVR]” (P4). Two participants (P3, P12) found the LEDs to be too bright, while others found it hard to see cues (P8), suggesting that the brightness of SparseLightVR could be made customizable per-person to improve the experience.

Study 1 – Search Task

Completion time of a trial in this task depended strongly on two factors: whether the next target was visible in the field-of-view at the start of the trial (i.e., whether the participant had to actively move their head to find the next target), and the angular distance to the next target. We also recorded the total angular distance traveled; due to the Oculus Rift's orientation smoothing, this total distance incorporates very little noise. We discarded trials in which participants missed the target on their first click attempt (162 trials in all), and trials in which the target appeared in the same location as in the previous trial (21 trials in all). Of the remaining 2367 trials, 988 had initially-visible targets.

To measure task performance, we use two measurements: the average targeting velocity (the ratio between the angular distance to the next target and the completion time) and the movement ratio (the ratio between the total angular distance travelled and the angular distance to the next target). The former measures how quickly the person targets the object, and the latter measures how efficiently they locate the target. Results are summarized in Figure 15. All significant differences were determined through repeated measures Tukey HSD at a p