Scanned Voxel Displays

Conventional stereoscopic displays can cause viewers to experience eye fatigue and discomfort, as well as compromise image quality, because these displays require the viewers to unnaturally keep the eye focal length fixed at one distance while dynamically changing the convergence point of the left and right eyes (vergence) to view objects at different distances. Volumetric displays can overcome this problem, but only for small objects placed within a limited range of viewing distances and accommodation levels; they also cannot render occlusion cues correctly. One possible solution – multi-planar scanned voxel displays – is described here.

by Brian T. Schowengerdt and Eric J. Seibel

THE HUMAN VISUAL SYSTEM is exquisitely specialized to perceive three-dimensional (3-D) relationships. A suite of interacting visual processes actively scan the environment, enabling the brain to accurately gauge the relative spatial locations of surrounding objects. When some depth cues are unavailable, the visual system uses the remaining cues to make best estimates. Think of how easy it is to walk across a room with one eye closed: the changing size and relative motion of objects, and the fact that nearby objects partially block the view of farther objects, provide sufficient depth cues to avoid running into obstacles. As tasks get more difficult – for example, shooting a basketball or excising tissue with a scalpel – the removal of some depth cues dramatically lowers performance.

Therefore, for difficult spatial tasks involving displays, such as laparoscopic surgery, it is helpful to provide as many accurate depth cues as possible, and the best way to do that is to use a 3-D display. But it is not that simple. The simplest and most common devices for presenting 3-D data are stereoscopic displays; however, though they can create a compelling feeling of depth by including some cues that are not available in 2-D displays, they also generate inaccurate cues that provide conflicting depth information – a situation to which the human visual system reacts very poorly. This imperfect mimicry of 3-D viewing conditions creates sensory conflicts within the visual system, leading to eye fatigue and discomfort.

For 3-D displays to truly achieve their enormous potential, these issues must be overcome. This article presents approaches to building 3-D displays that better mimic reality and do not create conflicts within the visual system, including various volumetric displays that avoid accommodation/vergence decoupling for small objects over a limited range of distances and our scanned voxel displays that overcome the conflicts across an unlimited range of object sizes and distances.

Hardwired Together: Focusing and Aiming the Eyes

When viewing objects in the real world, the information presented to and the demands placed upon the various processes in the visual system are matching and synchronous. One such process, accommodation, controls the focus of the eye's optics. Like a camera, the eye changes its focal power to bring an object at a given viewing distance into sharp focus on the retina (the imaging plane of the eye). Whereas a camera slides a lens forward or backward to shift focus from a distant to a nearby point, the eye stretches or relaxes an elastic lens positioned behind the iris and pupil, to change its convexity.1 A second process, vergence, controls the distance at which the lines of sight of the eyes converge – i.e., the distance at which a viewer is pointing his or her eyes.

When a viewer looks from an object in the distance to a closer object, two things must happen simultaneously in order to see the new object clearly: The viewer must converge both eyes to point at the new object and change the accommodation of the eyes' lenses to bring the object into focus (Fig. 1). These processes need to consistently act in concert when viewing real objects and, accordingly, a hardwired link connects their operation. A movement in one process automatically triggers a synchronous and matching movement in the other process.2

Stereoscopic and autostereoscopic displays provide one image to the left eye and a different image to the right eye, but both of these images are generated by flat 2-D imaging elements such as liquid-crystal-display (LCD) panels. The light from every pixel originates from a flat surface (or, in some cases, two flat surfaces), so the optical viewing distance to each pixel is exactly the same; namely, the distance to the screen. This optical viewing distance acts as a cue to the visual system that all of the objects are flat and located at the same distance, even though the stereoscopic information provides cues that some objects are behind or in front of the screen – this places a demand on accommodation to focus the eye to the distance of the screen. Only when the viewed object is positioned at the actual distance of the screen (i.e., the degenerate case of using the 3-D display as a 2-D display) can the eyes point and focus to matching distances and see a sharp image of the object (left side of Fig. 2). Objects that are positioned stereoscopically behind the screen require the eyes to point behind the screen while focusing at the distance of the screen, i.e., viewers must attempt to decouple the linked processes of vergence and accommodation (right side of Fig. 2), or else the entire display will be blurry (middle of Fig. 2). This forced decoupling is thought to be the major source of eye fatigue in stereoscopic displays,3-5 compromises image quality, and may lead to visual-system pathologies with long-term exposure (especially in the developing visual systems of children).6
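To put numbers on the mismatch, the short sketch below computes the two demands for an assumed screen distance and rendered object distance (illustrative values, not figures from this article); the demands agree only when the object is rendered at the screen distance.

```python
# Illustrative sketch (assumed values): demands placed on the two
# processes when viewing a stereoscopic display.

def accommodation_demand_diopters(screen_dist_m):
    # All light physically originates at the screen, so focus must
    # stay at the screen distance regardless of rendered depth.
    return 1.0 / screen_dist_m

def vergence_demand_meter_angles(rendered_dist_m):
    # The eyes point at the rendered stereoscopic distance.
    return 1.0 / rendered_dist_m

screen, rendered = 0.5, 2.0          # screen at 0.5 m, object "at" 2 m
print(accommodation_demand_diopters(screen))    # 2.0 D
print(vergence_demand_meter_angles(rendered))   # 0.5 MA
# Matched viewing of a real object at 2 m would demand 0.5 D and
# 0.5 MA; here the eyes must instead hold a 1.5-D focus/vergence
# mismatch, which is zero only when rendered == screen.
```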

Volumetric Displays: Accurate Focus Cues, Limited in Size

Volumetric displays represent an alternative 3-D technology that can create matching accommodation and vergence cues and thereby avoid the conflict generated by stereoscopic displays. One such volumetric display, called the Perspecta display (Actuality Systems), is a swept-screen multiplanar display.7 A circular projection screen (about 25 cm in diameter) spins around its center axis, sweeping the surface of the screen through a spherical 3-D volume. During each refresh cycle, a high-speed video projector projects 198 different 2-D slices of a virtual 3-D object onto 198 orientations of the spinning screen. Each point on the 3-D object is represented by a voxel (most simply defined as a three-dimensional pixel) within the 3-D volume, and light coming from that voxel reaches the viewer's eyes with the correct cues for both vergence and accommodation. Another volumetric display, the DepthCube (Lightspace Technologies), also uses a high-speed video projector to project multiple 2-D slices throughout a 3-D volume.8 Rather than sweeping a screen through the volume, however, the DepthCube contains a stack of 20 liquid-crystal scattering shutters. At any given instant of time, 19 of the 20 shutters are almost transparent, while one active shutter acts as a scattering rear-projection screen. The active state "sweeps" through the shutter stack in a fashion functionally similar to the sweeping screen of the Perspecta display.

 


Fig. 1: The linked operation of accommodation and vergence when viewing real 3-D objects. Left: As a viewer looks at the house in the distance, the lines of sight (black dotted lines) of his/her eyes are converged to point at the house, while the eyes' lenses (ovals near the front of each eye) are accommodated to a matching distance to focus the light reflected from the house (blue solid lines) onto the retina to form a sharp image of the house. Because the tree is at a different viewing distance, the light reflected from the tree in the foreground (solid green lines) comes to focus behind the retina and is blurred in the retinal image. Right: As the viewer shifts gaze to the tree, the eyes simultaneously converge to the distance of the tree and increase the convexity of the lenses to accommodate to the matched distance. The increased optical power of the lens brings the light reflected from the tree into focus on the retina, while the light reflected from the house shifts out of focus.

 

Though these volumetric displays create matching accommodation and vergence demands for the objects they display, they possess some disadvantages. A primary drawback is that the objects they depict are of limited size – they must physically fit within the scanned 3-D volume, such as within the 25-cm-diameter sphere of the Perspecta display. These displays cannot place two objects on opposite sides of a table, much less place objects on the distant horizon, and can shift the focal level of objects through only a correspondingly small range of accommodation. A second disadvantage is that they do not represent occlusion correctly. Every voxel is visible to the viewer, even if that voxel represents a point on the far side of the object that should not be visible from that angle. Additional difficulties stem from the computational demands imposed by the large number of voxels and the difficulty of leveraging conventional video-card technology to handle this load. For instance, each of the 198 slices of the Perspecta display has a resolution of 768 x 768 pixels, i.e., the display must render over 116 million pixels per frame, making computation of moving video infeasible with current graphics processing units.
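A quick back-of-envelope calculation makes the load concrete (the volume refresh rate below is an assumed value, not a figure from the article):

```python
# Back-of-envelope rendering load for the swept-screen display,
# using the slice count and resolution quoted above.
slices, width, height = 198, 768, 768

voxels_per_frame = slices * width * height
print(f"{voxels_per_frame:,}")   # 116,785,152 -> over 116 million

refresh_hz = 24                  # assumed volume refresh rate
print(f"{voxels_per_frame * refresh_hz / 1e9:.1f} billion voxels/s")
# ~2.8 billion voxels/s for moving video at this assumed rate
```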

A fixed-viewpoint volumetric display has been developed by Akeley and colleagues that, unlike multi-view volumetric displays, correctly renders occlusion cues.9 The approach is somewhat similar to that of the DepthCube, in that image slices are placed at fixed distances within a volume. However, rather than the slices being sequentially switched off and on, they are always on and are optically superimposed using beamsplitters. A separate stack of slices is used to create the 3-D volume displayed to each eye, allowing accurate occlusion and viewpoint-dependent lighting effects to be presented. There are, however, only three slices in the current prototype, and accurate focus cues for an object are produced only when it is located at the distance of one of the three slices. The light loss associated with optical combining using beamsplitters makes a significant increase in the number of layers problematic. The prototype is also constrained to a maximum depth range of 22.4 cm with focus cues, as the slices are placed 31.1, 39.4, and 53.5 cm from the viewer.

 


Fig. 2: Left: There is only one correct position for accommodation when viewing conventional stereoscopic displays. Even though the house and tree are at different stereoscopic distances (there is greater binocular disparity between the trees than there is between the houses rendered in the left and right stereo-images), they will either both be in focus (if the viewer is accommodated to the one correct distance) or both be out of focus (if the viewer accommodates to any other distance). Middle: If the viewer shifts his/her gaze to the tree in the foreground, the change in vergence triggers a matching involuntary shift in accommodation, causing the entire display to become blurry. Right: To bring the display back into focus, the viewer is forced to decouple the linked processes and keep accommodation fixed at the distance of the house while converging to the distance of the tree.

 

In order to recreate the full range of real-world depth perception, a 3-D display must be able to place pixels or voxels at optical distances ranging from the near point of accommodation (a focus distance of around 7 cm in a young viewer) to infinitely distant. We have developed a number of scanned voxel displays that, like volumetric displays, overcome the accommodation/vergence conflict, but that can also place objects anywhere from 6.25 cm from the viewer's eye to infinitely far away – surpassing the range required to match the full range of accommodation.
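In diopter terms (accommodation demand is the reciprocal of the focus distance in meters), the quoted range works out as follows:

```python
# The quoted focus range expressed in diopters (reciprocal meters).
def diopters(distance_m):
    return 1.0 / distance_m

print(round(diopters(0.07), 1))    # ~14.3 D: near point, young viewer
print(diopters(0.0625))            # 16.0 D: nearest plane of display
# Optical infinity is 0 D, so the display spans 0-16 D and covers the
# full ~14.3-D range of human accommodation with margin to spare.
```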

Approaches to Creating Scanned Voxel Displays

Scanned pixel displays such as the Virtual Retinal Display10,11 biaxially scan a color- and luminance-modulated beam of light, serially moving a single pixel in 2-D across the retina to form an image (Fig. 3).

We have integrated a variable-focusing element into a scanned light display to enable a voxel to be triaxially scanned throughout a 3-D volume (Fig. 4). Unlike volumetric displays, the light is not projected onto a screen (moving or otherwise) but instead forms a 3-D volume of light that is viewed directly by the eye. By positioning the 3-D volume between the surface of a lens and its focal point, the 3-D volume can be magnified to occupy a virtual space stretching from the lens to the distant horizon. As when viewing real 3-D objects, the eyes can focus upon different points within the 3-D volume.
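The placement of the virtual image follows from the thin-lens equation; a minimal sketch, assuming an illustrative focal length for the viewing lens (not a parameter of our prototype):

```python
# Thin-lens sketch: a scanned plane placed inside the focal length of
# the viewing lens is magnified into a distant virtual image.
# (Focal length and plane positions are illustrative assumptions.)

def virtual_image_distance(d_obj_m, f_m):
    """Return the virtual image distance (positive, on the same side
    as the object) for an object inside the focal length of the lens:
    1/f = 1/d_obj + 1/d_img, so d_img is negative (virtual) here."""
    assert d_obj_m < f_m, "object must lie inside the focal length"
    inv = 1.0 / f_m - 1.0 / d_obj_m   # 1/d_img (negative => virtual)
    return -1.0 / inv

f = 0.025                                 # assumed 25-mm viewing lens
print(virtual_image_distance(0.020, f))   # plane at 20 mm -> 0.1 m away
print(virtual_image_distance(0.0249, f))  # near focal point -> ~6.2 m
# As the plane approaches the focal point, the virtual image recedes
# toward optical infinity, giving the lens-to-horizon span described.
```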

We have designed and constructed a number of scanned-voxel-display prototypes using this approach, which are described in detail elsewhere,12–14 but we will briefly describe a recent prototype that presents full-color stereoscopic multi-planar video directly to each eye, using a scanning beam of light. Before the beam is raster-scanned in the X- and Y-axes, it is first "scanned" in the Z-axis with a deformable membrane mirror (DMM) MOEMS device from OKO Technologies (Fig. 5). The DMM contains a thin silicon nitride membrane, coated with a reflective layer of aluminum, stretched in front of an electrode. The shape of the reflective membrane is controlled by applying bias and control voltages to the membrane and electrode. With no applied voltage (left side of Fig. 5), the membrane forms a flat mirror and a collimated beam reflected from its surface remains collimated. With an applied voltage, the reflective membrane is electrostatically deflected toward the electrode, forming a concave parabolic surface that will focus a beam of light to a near point (right side of Fig. 5). Intermediate voltage levels shift the focal point anywhere between the near point and optical infinity (i.e., a collimated beam).
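As a rough model of this voltage-to-focus behavior (a sketch only: the quadratic voltage response is the usual first-order approximation for an electrostatic membrane, since electrostatic pressure scales with the square of voltage, and both constants below are assumed calibration values, not OKO specifications):

```python
# First-order model of an electrostatic deformable membrane mirror:
# pressure on the membrane scales with V^2, so to first order the
# center deflection, and hence the mirror's optical power, does too.
# Both constants are assumptions, not datasheet values.

V_MAX = 250.0      # assumed maximum drive voltage
K_POWER = 4.0      # assumed diopters of beam focus shift at V_MAX

def mirror_power_diopters(v):
    """Optical power added to the reflected beam at drive voltage v."""
    return K_POWER * (v / V_MAX) ** 2

for v in (0.0, 125.0, 250.0):
    print(v, mirror_power_diopters(v))
# 0 V   -> 0.0 D (flat mirror; a collimated beam stays collimated)
# 125 V -> 1.0 D (intermediate focus)
# 250 V -> 4.0 D (most concave; beam focuses to the nearest point)
```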

After being scanned in the Z-axis with the deformable membrane mirror, the beam is scanned in the X-axis with a spinning polygon mirror (Lincoln Laser Co.) and scanned in the Y-axis with a galvanometric mirror scanner (Cambridge Technologies), completing the tri-axial scan. This 3-D scanned voxel volume is optically divided with fold mirrors and relayed to the left and right eyes. The top of Fig. 6 presents a graphical overview of the complete optical system.

 


Fig. 3: A scanned pixel display projects a beam of color- and luminance-modulated light into the eye, and the lens of the eye (to the right of the pupil) focuses the beam to a point on the retina, creating a pixel. As the beam is scanned biaxially (scanner shown as the white box at the left), the pixel moves across the retina, forming a 2-D image. Only three pixels are shown, for simplicity of illustration.

 


Fig. 4: In the scanned voxel display, a modulated beam is triaxially scanned throughout a 3-D volume that is viewed directly by the eye. For simplicity, only two image planes and five voxels are shown. In the top image, the viewer is accommodating to the distant horizon, with the far rear plane in the volume in focus on the retina (the foci are represented by two green circles). Graphics in that far plane (e.g., distant mountains and clouds) will be in focus for the viewer, while graphics in the other planes will be blurred in proportion to their distance from the viewer's point of focus (represented by the three foci behind the retina – notice how their light is diffusely spread when it reaches the retina). In the bottom image, the viewer has shifted accommodation to a near point, increasing the optical power of the eye's lens. Now, the front plane of the volume is in focus on the retina, bringing graphics in that plane (e.g., a branch from a nearby tree) into sharp focus for the viewer, while the mountains and clouds in the far plane shift out of focus (the foci are in front of the retina, and the light is diffuse when it reaches the retina).

 


Fig. 5: The deformable membrane mirror (DMM) is used to dynamically change the focus of the beam before it is XY-scanned. The beam is shown entering from the bottom of the figure and being reflected to the right. If no voltage is applied across the membrane and electrode (left side of the figure), the membrane remains flat and doesn't change the focus of a beam reflected from its surface. If a voltage is applied (right side of the figure), the membrane electrostatically deflects toward the electrode, creating a concave parabolic mirror that shifts beam focus closer.

 

In this proof-of-concept prototype, two planes are scanned frame-sequentially into the eye. To provide the video content for the display, two images are presented in a "page-flipping" mode, in which even frames from the 60-Hz refresh rate are used to present one image, while the odd frames are used to present the second image. In synchronization with the page flipping of the images, the DMM shifts the focus of the scanning beam, such that the two images are projected to different depth planes, creating a two-plane voxel volume. The viewer perceives the superposition of the two planes as one composite multi-layer image. By naturally accommodating the eyes, the viewer can bring objects in the background [Fig. 6(a)] or foreground [Fig. 6(b)] into focus on his/her retina. By rendering an object to a plane in the volume that matches its stereoscopic viewing distance, the cues to accommodation and vergence are brought into correspondence. Figure 7 shows sample photographs of multi-layer images displayed on the prototype.
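In outline, the synchronization amounts to the loop sketched below; set_dmm_voltage and scan_frame are hypothetical driver hooks for illustration, not the actual interface of our prototype, and the drive levels are assumed values.

```python
# Sketch of the two-plane "page-flipping" scheme: even frames carry
# the far plane, odd frames the near plane, and the DMM focus state
# is switched in lockstep with the frame source.

REFRESH_HZ = 60
FAR_VOLTAGE, NEAR_VOLTAGE = 0.0, 180.0   # assumed DMM drive levels

def run_display(far_image, near_image, set_dmm_voltage, scan_frame):
    frame = 0
    while True:
        if frame % 2 == 0:
            set_dmm_voltage(FAR_VOLTAGE)   # flat mirror -> far plane
            scan_frame(far_image)          # raster-scan even frame
        else:
            set_dmm_voltage(NEAR_VOLTAGE)  # concave mirror -> near plane
            scan_frame(near_image)         # raster-scan odd frame
        frame += 1
        # Each plane refreshes at REFRESH_HZ / 2 = 30 Hz; the viewer
        # perceives the superposition as one two-layer image.
```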

Objectively Measuring Focal Range and Accommodation

In order to assess the full focal range of the prototype, we measured the diameter of the scanning beam at multiple locations with a beam profiler and used these measurements to calculate the degree of divergence of the beam across a range of DMM control voltages. The beam divergence data were, in turn, used to calculate the viewing distance of the virtual image and the amount of accommodation needed to bring the image into focus (Fig. 8). Virtual images displayed with the prototype can be shifted from 6.25 cm from the eye (closer than the near point of human accommodation) to optical infinity. Figure 9 shows objective measurements of the diopter power of accommodation (1/focal length) of human subjects to the display, taken with an infrared autorefractor (for more details, see Refs. 15 and 16). Subjects accurately shifted accommodation to match the image plane as it was optically shifted forward and backward with the DMM.
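The conversion behind Fig. 8 can be sketched as follows: the measured growth of the beam between two axial positions gives its divergence, back-projecting that divergence locates the virtual point source the beam appears to come from, and the reciprocal of that distance is the accommodation required. All numbers below are illustrative, not our measured data, and the small-angle approximation is assumed.

```python
# Sketch: beam diameters at two axial positions -> divergence ->
# virtual source distance -> accommodation demand in diopters.

def accommodation_from_beam(d1_m, d2_m, separation_m):
    """d1, d2 are beam diameters measured 'separation' apart along
    the axis. Returns (virtual image distance in m, diopters)."""
    divergence = (d2_m - d1_m) / separation_m   # radians (small angle)
    if divergence <= 0:
        return float("inf"), 0.0                # collimated: infinity
    source_dist = d1_m / divergence             # back-project to a point
    return source_dist, 1.0 / source_dist

dist, acc = accommodation_from_beam(0.002, 0.003, 0.1)
print(dist, acc)   # beam growing 1 mm over 10 cm -> source at 0.2 m, 5 D
```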

 


Fig. 6: The viewer brings different depth planes into focus by naturally shifting the accommodation of the eyes' lenses. By changing the voltage to the DMM rapidly, a frame-sequential multi-planar image is generated. (a) The viewer accommodates to the distance, so the house in the background plane is in focus while the tree in the foreground plane is somewhat blurred. (b) The viewer accommodates near, bringing the tree into focus on the retina while the house is shifted out of focus.

 

An interesting finding from our prior research is that the human accommodation response to the scanned voxel display depends upon the diameter of the scanning beam. When the scanning beam is greater than 2 mm in diameter, subjects accommodate accurately and consistently. However, if the diameter of the beam is reduced to 0.7 mm, the display creates the virtual equivalent of a pinhole lens – the depth of focus of the display increases, and accommodation begins to operate open-loop, becoming more variable both within and between subjects.17-20
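Geometric optics suggests why: the retinal blur produced by a given focus error scales with beam diameter, so a narrower beam enlarges the depth of focus and weakens the feedback signal that drives accommodation. A minimal sketch, assuming a just-noticeable blur angle (the threshold below is an assumed value, not a measurement):

```python
# Approximate depth of focus vs. beam diameter (geometric optics).
# Blur-circle angle ~ beam_diameter * defocus_in_diopters, so the
# defocus tolerated before blur is noticed scales as 1/diameter.

BLUR_THRESHOLD_RAD = 0.002   # assumed just-noticeable blur angle

def depth_of_focus_diopters(beam_diameter_m):
    return BLUR_THRESHOLD_RAD / beam_diameter_m

print(depth_of_focus_diopters(0.002))   # 2.0-mm beam -> ~1.0 D
print(depth_of_focus_diopters(0.0007))  # 0.7-mm beam -> ~2.9 D
# The ~3x larger depth of focus of the narrow beam leaves little
# blur signal to drive accommodation, consistent with the open-loop
# behavior described above.
```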

Current Challenges for the Single-Focus-Modulator Approach

We have described a proof-of-concept prototype that frame-sequentially projects two planes in a voxel volume, providing a limited degree of resolution in the Z-axis. One way to improve this resolution is to increase the number of frame-sequentially presented planes, mimicking the arrangement of volumetric displays, such as the swept-screen displays discussed in the introduction. However, unlike such volumetric displays, our scanned voxel displays are not limited to varying the Z-axis of voxels on a frame-by-frame basis. Indeed, it is not very computationally efficient to create a full 3-D voxel array since, for any given scene, the majority of voxels are not actively used to represent objects. A more elegant solution is to create a two-and-a-half dimensional (2.5-D) sculpted surface of voxels, in which there is one voxel per XY coordinate, and the Z-axis position of that voxel can be dynamically adjusted with a single focus modulator. This solution is more computationally efficient and better able to leverage conventional video-card architecture, as the display can be driven with a 2-D source image paired with a depth map of Z-axis values. For each refresh cycle of the display, the beam is moved in a 2-D XY raster, using the color and luminance data from the 2-D source image to control the intensities of the RGB light sources and the depth map to dynamically control the position of a single focus modulator on a "pixel-sequential" basis. Unfortunately, current DMMs are only capable of kHz focus modulation rates, rather than the MHz rates necessary to vary the focus of the beam on a pixel-sequential basis.
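In outline, the pixel-sequential scheme would reduce to the raster loop sketched below (set_focus and set_rgb are hypothetical hooks, not a real driver API; the per-pixel set_focus call is exactly the step that demands MHz-rate modulation):

```python
# Sketch of pixel-sequential 2.5-D scanning: one voxel per XY
# position, placed in Z by a single focus modulator driven from a
# depth map that accompanies the 2-D source image.

def scan_frame(rgb_image, depth_map, set_rgb, set_focus, width, height):
    """rgb_image[y][x] -> (r, g, b); depth_map[y][x] -> diopters."""
    for y in range(height):                # Y: galvanometric scanner
        for x in range(width):             # X: spinning polygon mirror
            set_focus(depth_map[y][x])     # Z: focus modulator, per pixel
            set_rgb(*rgb_image[y][x])      # modulate the RGB sources
    # At an assumed 640 x 480 at 60 Hz, this loop visits ~18 million
    # pixels/s, so set_focus must settle in ~50 ns -- MHz-class
    # modulation, beyond the kHz rates of current DMMs.
```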

 


Fig. 7: Photographs taken of multi-layered images displayed on the prototype scanned voxel display. Left: The camera is focused on the far voxel plane, which portrays a brick wall with green text. In the top photo, a voxel plane containing an image of a spider web is in front of the camera's plane of focus (analogous to a human viewer's point of accommodation). In the bottom photo, the voxel plane with the spider web is optically shifted with the DMM to align with the rear voxel plane. Middle: The display can also be used in a see-through augmented-reality mode, in which the voxel image is presented to the eye with a beamsplitter, enabling virtual objects to be optically placed within the real world. The camera is focused near the front voxel plane, which portrays a spider web. The rear voxel plane, containing a stone wall and yellow airplanes, is behind the camera's plane of focus. Right: In the top photo, both voxel planes are aligned on the Z-axis, and the camera is focused at this point, yielding a uniformly focused image. In the middle and bottom photos, the voxel planes are separated and the camera's focus is shifted between the front and rear voxel planes.

 


Fig. 8: The optical distance to a plane in the scanned voxel display (right axis) and the accommodation required to focus on it (left axis), plotted as a function of the voltage used to drive the DMM. The diopter power of ocular accommodation required to bring the image into focus is equal to the negative inverse of the distance to the virtual image as measured in meters.

 

Solid-state electro-optical materials promise a faster alternative to deformable membrane mirrors. New electro-optical polymers are being developed at the University of Washington,21 which will enable spatial light modulators that can operate at GHz rates – exceeding the speed requirements to perform pixel-sequential focus adjustment with a single modulator. As we await availability of these faster modulators, we are now developing scanned voxel displays that contain multiple focus channels in parallel, in order to overcome the speed limitations associated with a single modulator using currently available technology.

Scanned Voxel Displays Using Multiple Light Sources

In the prototype described above, a single RGB composite light beam is focus-modulated and scanned into the eye. We are nearing completion of a next-generation scanned voxel display that contains multiple RGB beams, each of which is placed at a different focus level before the beams are optically combined (see Fig. 10). The composite multi-focal RGB beam is then XY-scanned into the viewer's eyes, with each component beam creating a different plane in a voxel volume, so that a layered multi-focal virtual image appears to float in space. Unlike the prior prototype, in which multiple planes are produced frame-sequentially, the new display generates the multiple planes simultaneously. The differences in focus between beams can be created by using fixed lenses (or mirrors) with different optical powers or by placing non-collimated light sources at different distances from a lens. As an alternative to using fixed-power lenses to create focus differences, each light source can be provided with a separate dedicated focus modulator (e.g., a DMM). Doing so provides the advantage that the Z-axis spacing between planes can be dynamically adjusted to be optimal for a given scene, a given viewer, or a given state of the observer (for instance, if an eye tracker or accommodation tracker is available, the planes can be shifted to most densely represent the viewer's region of interest). Each object in a virtual scene can also be assigned to a separate focus layer, and as that object moves, the focus of the layer can be adjusted with the focus modulator to follow the object in depth.
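The control idea can be sketched as follows (the classes and the gaze hook are hypothetical illustrations of the scheme, not our actual control software):

```python
# Sketch of the multi-source scheme: each RGB beam owns a focus
# channel, so plane depths can be retargeted per scene, per viewer,
# or per object.

class FocusChannel:
    def __init__(self, set_modulator):
        self.set_modulator = set_modulator   # e.g., a dedicated DMM

    def place_at(self, diopters):
        self.set_modulator(diopters)         # move this plane in Z

def follow_object(channel, object_depth_diopters):
    """Retarget a layer so its focus tracks the object it carries."""
    channel.place_at(object_depth_diopters)

def cluster_planes_near_gaze(channels, gaze_diopters, spacing=0.5):
    """With an accommodation or eye tracker available, pack the
    planes around the viewer's region of interest (spacing in D)."""
    for i, ch in enumerate(channels):
        ch.place_at(gaze_diopters + (i - len(channels) / 2) * spacing)
```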

One advantage of using multiple light sources to create different planes is that multiple focus distances can be presented along the same line of sight, enabling pixel-accurate depictions of transparency and reflections. For instance, a scene can be rendered in which a fish swimming beneath the surface of a lake and the reflection of a faraway mountain on the lake's surface are seen overlapped, with the fish and the mountain placed at different optical distances.

Conclusion

As we have discussed, conventional stereoscopic displays create fatiguing cue conflicts in the visual system between accommodation and vergence because viewers are forced to focus their eyes at one distance and point them at a different distance. Current multi-viewpoint volumetric displays can only overcome this conflict for small objects over a limited range of focus distances and cannot render occlusion cues correctly. We have presented two approaches to building 3-D scanned voxel displays that better mimic natural vision, projecting objects of any size at viewing distances from 6.25 cm to optical infinity and overcoming the cue conflict throughout the full range of human accommodation.

 


Fig. 9: The observed accommodation responses to two prototype scanned voxel displays plotted as a function of the objectively measured focus levels of the displays. Red circles represent accommodation responses of 10 subjects (averaged over time and subject) while viewing a display with a 3.5-mm-diameter exit pupil. Green squares represent the average (over time) accommodation response of a single subject viewing a display with an exit pupil between 2.9 and 1.6 mm. Least-squares linear regression lines have been fitted to each data set.

 

Commercial realizations of our prototype scanned voxel displays can include a lightweight head-mounted display (HMD), ideal for wearable computing and augmented reality applications, or a stand-alone desktop display, designed to be viewed from a distance. Using batch microfabrication techniques, the MOEMS scanners can be produced at low cost. Red laser diodes are inexpensive, allowing portable monochrome red scanned voxel displays to be manufactured affordably. A portable full-color system would currently require higher manufacturing costs. Blue laser diodes are expensive and have shorter lifetimes, but it is anticipated that both cost and lifetime will improve in the next few years. Small prototype green semiconductor lasers capable of MHz-rate luminance modulation have been demonstrated by Corning, Novalux, and OSRAM and will soon reach large-scale commercial production.

Non-fatiguing 3-D displays can be used for all 3-D viewing applications for which conventional stereoscopic systems are typically used. There are, however, some applications for which they are critical. Surgeons are increasingly using minimally invasive methods (e.g., endoscopy and laparoscopy) that require looking at displays for many continuous hours. 3-D displays enable surgeons to better guide endoscopes around obstructions within the narrow spaces of the body, but doctors must remain in top mental form throughout long surgeries, so it is crucial that these displays be non-fatiguing and comfortable. The guidance of minimally invasive surgery tools is a form of teleoperation, and other forms of teleoperation, such as the piloting of remote UAVs (unmanned aerial vehicles), also can greatly benefit from 3-D displays that can be comfortably viewed for extended durations. Finally, as 3-D displays are used for video games, we should not present young children with sensory conflicts that could lead to pathologies in their developing visual systems. While surgeons must spend hours concentrating on displays during surgery, children often voluntarily spend even longer periods concentrating on video-game displays.

 


Fig. 10: Overview of next-generation scanned voxel display. Multiple pixel streams are generated with separate RGB light sources. Each is placed at a different focus level before being optically combined and XY-scanned to form two 3-D volumes, one viewed by the left eye and one by the right.

 

Acknowledgments

This research was supported by a grant from the National Science Foundation Major Research Instrumentation program.

References

1. H. L. F. von Helmholtz and A. P. König, Handbuch der physiologischen Optik (Leipzig: L. Voss, 1909).

2. E. F. Fincham, "The accommodation reflex and its stimulus," British Journal of Ophthalmology 35, 381-393 (1951).

3. M. Mon-Williams, J. P. Wann, and S. Rushton, "Binocular vision in a virtual world: Visual deficits following the wearing of a head-mounted display," Ophthalmic & Physiological Optics 13, 387-391 (1993).

4. P. A. Howarth, "Empirical studies of accommodation, convergence and HMD use," Proceedings of the Hoso-Bunka Foundation Symposium: The Human Factors in 3-D Imaging (1996).

5. D. M. Hoffman, A. R. Girshick, and M. S. Banks, "Vergence-accommodation conflicts hinder visual performance and cause visual fatigue," J. Vision 8(3):33, 1-30 (2008).

6. S. K. Rushton and P. M. Riddell, "Developing Visual Systems and Exposure to Virtual Reality and Stereo Displays: Some Concerns and Speculations about the Demands on Accommodation and Vergence," Applied Ergonomics 30, 69-78 (1999).

7. G. E. Favalora, J. Napoli, D. M. Hall, R. K. Dorval, M. G. Giovinco, M. J. Richmond, and W. S. Chun, "A 100-Million-Voxel Volumetric Display," Proc. SPIE 4712, 300-312 (2002).

8. A. Sullivan, "A Solid-State Multi-Planar Volumetric Display," SID Symposium Digest Tech Papers 34, 1531-1533 (2003).

9. K. Akeley, S. J. Watt, A. R. Girshick, and M. S. Banks, "A Stereo Display Prototype with Multiple Focal Distances," ACM Transactions on Graphics 23, 804-813 (2004).

10. T. A. Furness and J. Kollin, Virtual Retinal Display, U.S. Patent No. 5,467,104 (1995).

11. R. Johnston and S. Willey, "Development of a commercial Virtual Retinal Display," Proc. SPIE Helmet- and Head-Mounted Displays and Symbology Design Requirements 2464, 2-13 (1995).

12. N. L. Silverman, B. T. Schowengerdt, J. P. Kelly, and E. J. Seibel, "Engineering a retinal scanning laser display with integrated accommodation depth cues," SID Symposium Digest Tech Papers 34, 1538-1541 (2003).

13. B. T. Schowengerdt, E. J. Seibel, N. L. Silverman, and T. A. Furness, "Stereoscopic retinal scanning laser display with integrated focus cues for ocular accommodation," in A. J. Woods, J. O. Merritt, S. A. Benton, and M. T. Bolas (eds.), Stereoscopic Displays and Virtual Reality Systems XI, Proc. SPIE-IS&T Electronic Imaging 5291, 366-376 (2004).

14. B. T. Schowengerdt and E. J. Seibel, "True 3-D Scanned Voxel Displays Using Single or Multiple Light Sources," J. Soc. Info. Display 14(2), 135-143 (2006).

15. S. C. McQuaide, E. J. Seibel, J. P. Kelly, B. T. Schowengerdt, and T. A. Furness, "A retinal scanning display system that produces multiple focal planes with a deformable membrane mirror," Displays 24, 65-72 (2003).

16. B. T. Schowengerdt, E. J. Seibel, N. L. Silverman, and T. A. Furness, "Binocular retinal scanning laser display with integrated focus cues for ocular accommodation," in A. J. Woods, J. O. Merritt, S. A. Benton, and M. T. Bolas (eds.), Stereoscopic Displays and Virtual Reality Systems X, Proc. SPIE-IS&T Electronic Imaging 5006, 1-9 (2003).

17. H. Ripps, N. B. Chin, I. M. Siegel, and G. M. Breinen, "The effect of pupil size on accommodation, convergence, and the AC/A ratio," Investigative Ophthalmology 1, 127-135 (1962).

18. R. T. Hennessy, T. Iida, K. Shina, and H. W. Leibowitz, "The effect of pupil size on accommodation," Vision Research 16, 587-589 (1976).

19. P. A. Ward and W. N. Charman, "Effect of pupil size on steady-state accommodation," Vision Research 25, 1317-1326 (1985).

20. P. A. Ward and W. N. Charman, "On the use of small artificial pupils to open-loop the accommodation system," Ophthalmic & Physiological Optics 7, 191-193 (1987).

21. L. R. Dalton, "Organic Electro-Optic Materials," Pure and Applied Chemistry 76, 1421-1433 (2004).

 


Brian T. Schowengerdt and Eric J. Seibel are with the Human Interface Technology Laboratory and the Department of Mechanical Engineering, University of Washington, Box 352142, Seattle, WA 98195-0001; telephone 206/616-1471, e-mail: bschowen@u.washington.edu.