Editorial

Vision for a New Year

by Stephen P. Atwood

Happy New Year and welcome to 2017.  By the time you read this, many of us will be back from Seattle, where we assembled the technical program for this year’s Display Week Symposium to be held in Los Angeles, May 21–26.  I strongly suggest marking your calendar now and making your reservations soon.  This year is sure to be another “don’t-miss” event with many new developments to see and hear.  Thus far, the paper submission count is over 600, with a very high number focused on Virtual Reality (VR), Augmented Reality (AR), and holography/3D displays of various forms.  When we started covering this topic a few years ago in ID, I said that the innovations would start coming fast once certain foundational technology problems were overcome.  That prediction is looking like a safer bet every season.  Of course, tomorrow is not going to bring the holodeck or the real-time light-field projection TV to your living room, but I think we are on the verge of seeing credible commercial endeavors.  These include head-worn AR/VR technology and possibly a new concept that Intel terms “Merged Reality” (MR).

The definition of success might be fluid, with leading-edge applications such as gaming, social media, and entertainment novelties driving initial demand.  Surely, some hardware providers will be overly eager to push new things to market to satisfy investors.  But, unlike stereoscopic TV, I do not think this is going to flash and fade.  I think the potential to create or enhance so many applications, along with solving current limitations in our existing user-interface world, will combine with the rapidly growing pool of hardware and software components to produce an unstoppable wave.

An example of this is on our cover, which shows a typical user trying to find their way in downtown Manhattan – an experience I believe most of us can relate to.  Traditional navigation tools are good today, showing 2D maps and usually providing decent turn-by-turn directions.  However, it is easy to see how a true 3D rendering of the entire area, with building sizes shown to actual scale, would dramatically enhance the value and accessibility of the application.  We present this example thanks to the generosity of our friends at LEIA, Inc., a technology spinoff from HP Labs.  The display shown is one of the company’s technology illustrations; we were inspired to use it after conducting our interview with LEIA Founder and CEO David Fattal, which appears in this issue.  I think it is fair to predict that consumers would line up in large numbers to buy a smartphone with this feature in its display.  We could debate whether the most useful application would be 3D navigation or something else, but I am confident this display capability, especially if combined with some type of 3D gesture sensing, would be a major value to consumers.

Our issue theme this month is Applied Vision, and in that context we bring to you three features developed by our Guest Editor Martin (Marty) Banks, professor of optometry, vision science, psychology, and neuroscience at UC Berkeley.  In his Guest Editorial titled “Display Imagery vs. Real Imagery,” Martin talks about a “Turing Test” for 3D displays in which a user would be challenged to decide if they were viewing a real scene or one created by a display.  It is tempting to dismiss the likelihood of us ever being fooled in such a way, but for the sake of argument I choose to believe that this is indeed a possibility.

Consider the computer-driven applications of today that might arguably pass the original Turing test.  Turing proposed that a human evaluator would be unable to determine the source of a natural-language conversation between a human and a machine designed to converse like a human – i.e., is it a human or a machine on the other side of the conversation?  Turing did not even require that the computer render actual speech, but in fact there are several examples today of computers able to conduct natural-language conversations, including some capable of producing synthetic speech with a great deal of realism and some personality.

Similarly, computers can drive cars – in many cases better than humans.  In both cases, computers are designed to mimic human behavior (or improve on it) using the boundaries and conventions established by humans (like social conventions or highway rules).  Essentially, you can fool a human by mimicking a human.  So, with this context, we can see how fundamental it is for any true 3D display system to mimic the natural characteristics of human vision if there is a hope of achieving a Turing-like outcome.  As Martin succinctly states, “…an understanding of human vision is proving to be crucial to the enterprise because in the end the goal is to provide the desired perceptual experience for a human viewer.”  Hence, the three outstanding articles that Martin has developed for us focus on this theme.  We are very grateful for his hard work, especially through the holidays, to provide an excellent ensemble for our ID readers.

The first is a Frontline Technology article by Michael J. Gourlay and Robert T. Held, both associated with a team at Microsoft that is developing technology for HoloLens, Windows Hello, and Windows Holographic.  This article, titled “Head-Mounted-Display Tracking for Augmented and Virtual Reality,” provides a complete primer on the fundamental principles of head tracking and describes the challenges and best practices being developed today.  In order for a virtual world to appear real, the technology must be able to respond accurately to an observer’s natural head and body movements exactly as they would occur in the physical world.  Getting this right will be paramount to a seamless, believable virtual experience.  This article provides a clear understanding of the fundamentals as well as the latest thinking from people who are clearly driving this research area.

The second Frontline Technology feature, “Visible Artifacts and Limitations in Stereoscopic 3D Displays,” written by Paul V. Johnson, Joohwan Kim, and Martin S. Banks, provides the most complete treatment of this subject we have published to date and will easily become an important reference article in the future.  It is especially interesting where the authors point out some fairly well-accepted but apparently incorrect beliefs about how observers merge the left- and right-eye images and about the perceived resolution of 3D stereo images.  New ideas employing hybrid spatial, temporal, and color-based interlacing are explained and explored for their advantages over current methods – expertly taking into account the features and limitations of human vision to gain an edge.

The third Frontline Technology feature from author Johannes Burge, Assistant Professor at the University of Pennsylvania, is titled “Accurate Image-Based Estimates of Focus Error in the Human Eye and in a Smartphone Camera.”  Johannes reports on some excellent work characterizing the aspects of human vision that make focusing in the physical world so intuitive and apparently instantaneous.  Did you know, for example, that you probably refocus your eyes more than 100,000 times per day?  When you do, I doubt you experience any noticeable hunting the way a digital camera’s image does while it focuses.  That is because the human eye has several important characteristics that provide additional cues to aid adjustment of the lens – characteristics not utilized in today’s auto-focus algorithms.  I am sure you will find this article very interesting and educational.

Earlier I mentioned our cover and the technology from LEIA, Inc., being illustrated.  The company’s Founder and CEO David Fattal participated in a digital interview with Jenny Donelan for a Business of Displays feature to explain his company and technology, some creative applications, and his efforts to jumpstart the company to get its displays into the hands of customers.  It’s exciting in part because LEIA is working with existing cellphone and tablet LCDs with modifications to the backlight structure.  Fattal refers to this capability as a “diffractive light-field backlight (DLB).”  The result is a display that can be operated either in its original 2D mode or in a 3D light-field “holographic” mode, making its implementation into existing handheld devices seem relatively easy.

Our final Frontline Technology feature for this month is still somewhat vision related.  It is a story by author Trevor Vogt, Product Manager at Gamma Scientific, discussing the company’s latest advancements in “Quantifying Display Coating Appearance” – or, more specifically, in measuring the optical performance of anti-reflective (AR) and similar coatings directly from the substrate without some of the problems, such as second-surface reflections, usually associated with this type of measurement.  What I like about this article is both the innovation (and inherent simplicity) of the solution and the company’s willingness to discuss performance under real-world conditions at an actual coating manufacturer’s facility.  The article also includes some good background on AR-coating technology and on the metrology methods generally employed today.

Turning our attention now to the good works of our Society, we offer a special edition of SID News covering the latest bylaw changes affecting the governance structure of SID.  President-Elect Helge Seetzen, with some help from Jenny Donelan, outlines for us the reasons for the recent significant changes to the makeup of the SID Board of Directors and how this will help SID grow stronger in the years to come.  If you were not aware of these changes, and I suspect some of you may not be, please take the time to read this news.  It is a great thing that is happening and reflects the substantial vision and talents of our SID leadership team.

By now you must be thinking this is a big issue of ID magazine, and indeed it is.  I feel like we are starting the New Year off with a strong product and we could not do that without the incredible efforts of our Guest Editors and all our volunteer authors.  And so, once again I want to say thank you not only to the people who contributed to this issue but to everyone who gives us their time and effort to make Information Display come together each issue.  To everyone I wish much good health, success, and happiness in the New Year!  •