Beginning to See the Light (Field)

by Stephen P. Atwood

Our attention this month is drawn to light-field technology and the promise of a high-resolution, true-3D display experience that includes occlusion and perspective – just like in real life! Often dreamed about and frequently represented in science-fiction movies, the real thing has proven elusive all these years due to countless architectural problems that are only now beginning to yield to relentless engineering efforts.

Guest editor Nikhil Balram is back this year to bring us two more excellent features on practical advances in the field. With these advances, we seem to be marching ever closer to commercial viability. You should start by reading his guest editorial to learn his perspective on the current state of the field. I also want to thank Nikhil for all his effort and support in building this issue.

The Holy Grail, as they say, is a display viewable from a full 360° that can render any scene just as it would appear in real life. In our first Frontline Technology feature, author Thomas Burnett describes the various challenges facing light-field developers, including the total data volume that would be necessary to create this type of display. To overcome some of these challenges, he proposes a new graphics pipeline and a distributed-processing architecture that just might unsnag some of the bottlenecks of traditional approaches. Read the details in “Light-Field Displays and Extreme Multiview Rendering.” Thomas analyzes this challenge from a number of different angles. Even with some very clever methods, the rendering times and total data requirements might seem onerous, but we’re in a time when computing power is still growing exponentially and the cost of processing continues to fall, so the gap is closing.
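
To get a feel for the scale of the data problem, consider a quick back-of-envelope estimate. The numbers below are my own hypothetical assumptions for illustration only – they are not figures from Thomas’s article – but they show how quickly the view count multiplies an ordinary frame into something enormous.

# Back-of-envelope illustration of the light-field data problem.
# All numbers here are hypothetical assumptions, not figures from the article.
hogels = 500 * 500           # spatial samples (hogels) across the display surface
views_per_hogel = 45 * 45    # angular samples (views) emitted by each hogel
bytes_per_sample = 3         # 24-bit color
fps = 60

bytes_per_frame = hogels * views_per_hogel * bytes_per_sample
print(f"{bytes_per_frame / 1e9:.1f} GB per frame")              # ~1.5 GB
print(f"{bytes_per_frame * fps / 1e9:.0f} GB/s uncompressed")   # ~91 GB/s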

Consider, for example, his proposal for arrays of multiview processing units (MvPUs) that can reduce the proposed approaches to silicon and operate independently of any upstream information-management hardware. Dedicated MvPUs, currently under development, can then directly drive assigned sections of the light-field optical array, so that a massively parallel light-field rendering system can be realized. This system would then be driven by Object GL, an object-oriented graphics API that is also under development. This is promising work and well worth the effort and optimism invested in it.
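
As a rough illustration of the distributed idea, here is a minimal sketch under my own assumptions: the grid sizes, worker counts, and function names below are placeholders, not the Object GL API or the MvPU design described in the article. It simply shows each rendering unit working through its assigned section of the hogel grid independently, with no shared upstream state.

# Illustrative sketch only: partitioning multiview rendering across independent
# processing units, in the spirit of the MvPU idea. All names and numbers are
# hypothetical placeholders, not the architecture described in the article.
from concurrent.futures import ProcessPoolExecutor

HOGELS_X, HOGELS_Y = 64, 64      # hypothetical hogel grid of the optical array
VIEWS_PER_HOGEL = 32 * 32        # hypothetical angular samples per hogel
NUM_UNITS = 16                   # number of independent rendering units

def render_tile(tile_id):
    """Render every view in this unit's assigned section of the hogel grid.
    Each unit works only from the shared scene description; it needs no
    upstream frame buffer or coordination with the other units."""
    rows_per_unit = HOGELS_Y // NUM_UNITS
    row_start = tile_id * rows_per_unit
    samples_rendered = 0
    for y in range(row_start, row_start + rows_per_unit):
        for x in range(HOGELS_X):
            # Placeholder for the real per-view projection and shading work.
            samples_rendered += VIEWS_PER_HOGEL
    return tile_id, samples_rendered

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=NUM_UNITS) as pool:
        results = list(pool.map(render_tile, range(NUM_UNITS)))
    total = sum(count for _, count in results)
    print(f"{total:,} view samples rendered across {NUM_UNITS} independent units")

The point of the sketch is simply that the per-section work is embarrassingly parallel, which is what makes an implementation built from many small, independent silicon units so attractive.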

While many people are working on the options for multiview displays, others are focused on single-view AR/VR headsets that also aim to produce a lifelike, real-world visual experience. But once again, the challenge of total data volume and bandwidth presents itself as a seemingly unyielding constraint. One advantage of a headset display, however, is that there is only one observer, and you can take special liberties with the presentation if you know exactly where that observer’s gaze is focused. This is because, as authors Behnam Bastani et al. explain, “Primary image capture receptors in the retina are concentrated in a narrow central region called the fovea. The image that is acquired at each instant produces high information content only in the foveal region.” In other words, only a very narrow region of your eye, and hence a very narrow field of view, can see in high definition. Because of this unique characteristic, the authors propose a “Foveated Pipeline for AR/VR Head-Mounted Displays.”

You might think of it as a round portal in the center of the image that is rendered at the highest possible resolution, while the rest of the surrounding area can be rendered at much lower resolution without affecting the observer’s perception of the image. It sounded to me like a bit of a trick at first, but after reading this article and taking in the associated excitement around the concept at Display Week this year, I’m convinced this is a very promising approach. It’s somewhat analogous to the early days of color TV, when we took advantage of the observer’s lower sensitivity to resolution in certain colors to minimize the total channel bandwidth. Behnam and his coauthors (including Nikhil) take this concept all the way through a practical system architecture and a full image-processing and data-management design to show how it can be implemented with existing technology, including silicon. This is very significant work, and promising images have already been demonstrated.
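
To make the idea concrete, here is a minimal compositing sketch – my own illustration with arbitrary resolutions, patch sizes, and gaze coordinates, not the pipeline the authors actually describe. It renders the periphery at a fraction of the panel resolution, upscales it, and overlays a small full-resolution patch at the gaze point.

# Rough sketch of foveated compositing: a low-resolution background upscaled to
# the display, with a full-resolution patch around the gaze point. Resolutions,
# patch size, and gaze coordinates are arbitrary stand-ins, not the authors' design.
import numpy as np

DISPLAY_W, DISPLAY_H = 1280, 1440     # hypothetical per-eye panel resolution
PERIPHERY_SCALE = 4                   # periphery rendered at 1/4 resolution per axis
FOVEA_SIZE = 256                      # side of the high-resolution foveal patch (pixels)

def render(width, height):
    """Stand-in for the real renderer: returns an RGB image of the given size."""
    return np.random.rand(height, width, 3).astype(np.float32)

def foveated_frame(gaze_x, gaze_y):
    # 1. Render the whole field of view at reduced resolution and upscale it.
    low = render(DISPLAY_W // PERIPHERY_SCALE, DISPLAY_H // PERIPHERY_SCALE)
    frame = np.repeat(np.repeat(low, PERIPHERY_SCALE, axis=0), PERIPHERY_SCALE, axis=1)

    # 2. Render only the small foveal region at full resolution.
    fovea = render(FOVEA_SIZE, FOVEA_SIZE)

    # 3. Composite the sharp patch over the upscaled periphery at the gaze point.
    x0 = np.clip(gaze_x - FOVEA_SIZE // 2, 0, DISPLAY_W - FOVEA_SIZE)
    y0 = np.clip(gaze_y - FOVEA_SIZE // 2, 0, DISPLAY_H - FOVEA_SIZE)
    frame[y0:y0 + FOVEA_SIZE, x0:x0 + FOVEA_SIZE] = fovea
    return frame

frame = foveated_frame(gaze_x=640, gaze_y=720)
print(frame.shape)  # (1440, 1280, 3); only about a tenth as many pixels were
                    # actually rendered as a full-resolution frame would require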

VR and AR Developments

Staying with a similar theme, we offer the next installment in our review coverage of Display Week 2017, “Recent Developments in Virtual-Reality and Augmented-Reality Technologies.” Frequent contributor Achintya Bhowmik was at the hub of many of the new developments revealed this year as a teacher, developer, and observer. It’s been a busy year, with so many new concepts being shown, including a whole range of new augmented- and mixed-reality devices. In fact, even the vocabulary is getting complicated. For example, where is the boundary between virtual reality and augmented reality? Is it at the intersection of transparent optics that merge simulated scenes with real scenes, or is it something else? Well, whatever it is, what really matters is the rapid pace at which we’re seeing creative solutions and new ideas, including new ways to address the classic accommodation-vergence conflict. Achintya’s review covers several critical areas of this field, including hardware innovations in the displays themselves and the methods for achieving acceptable visual acuity (including foveated rendering). It’s a great summary of the state of the field from one of its foremost experts.

One of the companies that is working hard to bring a new type of light-field display technology to market is Avegant. Led by cofounder and CTO Edward Tang, Avegant is designing a new generation of transparent headset displays that can render light-field images in a virtual space in front of the observer with technology the company calls “retinal imaging.” Of interest to us was not just Avegant’s innovation but how far the company has gotten and what it was like to develop this technology in a startup environment. Our own Jenny Donelan took the challenge and produced this month’s Business of Displays Q & A feature for your enjoyment. At one point during the conversation, Ed describes the experience: “Startups are a rollercoaster. You have the extremes of emotion and experience at both ends – the highs and the lows – and those happen almost simultaneously. There are things going on that are so exciting that you can’t believe it. At the same time, the things that you worry about are incredibly stressful as well.”

Having been in several startup environments myself over the years, I could not have said it better. A cool idea and a new product are exciting, but getting the entire business over the goal line and into the marketplace is a far greater and more stressful challenge than developing the technology alone. I tip my hat to Ed and to everyone else in our great industry who risks so much in pursuit of entrepreneurial success.

Market Realities for Stretchable Technology

Another area that has seen a lot of activity this year is stretchable displays and electronics. We’ve covered some interesting highlights recently in ID, and there were some very notable demonstrations at Display Week this year as well. However, there has been a lot more to talk about than we could cover ourselves, so we asked our friends Khasha Ghaffarzadeh and James Hayward from IDTechEx to write this month’s Display Marketplace feature, “Stretchable and Conformable Electronics: Heading Toward Market Reality.” As the title implies, the advances we see today are just the tip of the iceberg, with all the underlying innovations lining up to bring some really interesting concepts to market. And while much of this work does produce visual items, such as clothing that can change color and displays that can be applied in countless ways, there is also a whole field of devices that can be integrated into fabrics to monitor health and wellness and to protect users from dangers in the environment. Imagine, for example, clothing that can detect heat or dangerous vapors before the wearers are actually exposed. But this is not yet a mature space by any measure, and the supply chain, market needs, and other business aspects are far from well defined. The IDTechEx authors have done a great job of showing where we are now and where we will need to go, and that is the picture they want you to appreciate.

As we approach the end of 2017, I want to wish everyone safe and happy holidays. I also want to acknowledge those who may be struggling with loss due to recent events and extend my heartfelt sympathies to you. May you find peace, comfort, and security during this season. Cheers and best wishes!  •