Display Week 2017 Daily Reports

Foveal rendering, high-resolution automobile lamps, flat-panel speakers – and friends – were only a few of the highlights that Information Display’s experts reported on from Display Week 2017 in Los Angeles.

by Information Display Staff

Every year, ID magazine’s ace reporters focus on specific areas of technology at Display Week, sharing their discoveries in daily blogs during the show. This is an invaluable service because, as we all know, one person can’t take in everything that Display Week has to offer. It’s best to divide and conquer.

We offer here a small sampling of our reporters’ blogs (including one from ID Executive Editor Stephen Atwood) to give you a taste of the show. If these excerpts inspire you to read more, please do so at www.informationdisplay.org.

In the next issue of the magazine, we will feature full-length articles from this year’s reporters – Achin Bhowmik, Karlheinz Blankenbach, Gary Feather, Tom Fiske, Steve Sechrist, and Ken Werner.

In the meantime, enjoy these sample blogs.

The Most Valuable Part of Display Week

By Tom Fiske

Display Week is about more than the biggest display, the highest contrast, or even the best technical paper. Among the most valuable parts of the week are the relationships – the new ones and the renewed ones – as well as the opportunity to be involved, at many levels, in one of the most exciting technology fields around.

I’ve been attending SID’s conferences since the early ’90s, so I’ve been around long enough to legitimately reminisce about the good old days (yeah – I’m one of those guys). When I joined SID, the cathode ray tube was king (but nervously looking over its shoulder at ambitious usurpers); active addressing for super-twisted-nematic LCDs was going to preserve STN’s relevance against the rising tide of a-Si AMLCD technology; and belt-worn pagers with reflective twisted-nematic displays were among the most popular mobile communication choices – if you even needed that sort of thing.

The conference, then popularly known as SID and since rebranded as Display Week, has developed into the premier event for hearing about and seeing (and touching) new display technology. You can attend the symposium and exhibition (Fig. 1); get up to speed on related fields from world-class experts at the short courses and seminars; and learn about the latest trends at the business, investors, and market-focus conferences.

Fig. 1:  The show floor at Display Week always features many discoveries.

Every time I go to Display Week, I go with the expectation of learning about new technology, applications, and business trends. My engineer and researcher friends attend technical sessions and meet with suppliers and customers; the business folks go to make deals; and the marketers have the opportunity to pitch their latest wares and check out the competition. Of course, we all network. But in addition to the technology, business, and market-centric events, there are the educational opportunities and display-industry support activities – all those “soft” activities that are difficult to quantify for the bottom line. The short courses, seminars, forums, standards meetings, and other events provide an invaluable service to the display community by making space for learning and building the infrastructure necessary for the industry’s future.

As a volunteer over the years, I’ve had the unique privilege to peer “under the hood” at some aspects of SID’s business and conference formation, as well as its display standards support. There are the usual eccentric personalities to contend with and struggles to get volunteers to follow through on commitments, but all in all, I’ve had a great time, learned a lot, and made some very valuable friendships. These are some of the most exceptional and talented people I have ever spent time with. I have worked with many (some at multiple companies), served booth duty with them, been their customers, written papers with them, and traded war stories with them. I have given and received job leads. I appreciate those relationships formed over time, even though I see my “SID friends” only once or twice a year. I remember fondly the ones who have passed on, and take comfort with those who remain.

So take this as a personal recommendation from me. If you didn’t attend Display Week this year, start planning your strategy to get here next year. Get involved with SID at the local level; volunteer; write and submit good papers; take advantage of the learning opportunities; and attend the seminars, the symposium, and the exhibition. Above all, make friends for a lifetime.

Pixels, Pixels, and More Pixels

By Achin Bhowmik

“How many pixels are really needed for immersive visual experiences with a virtual reality (VR) head-mounted display (HMD)?” This was one of the most frequent questions I heard during and after the short course I taught at this year’s Display Week.

So I thought I would reflect on this a bit, and point to some recent developments and trends in the display industry as gleaned from the presentations and demonstrations at this year’s event.

First, let’s consider some basic, back-of-the-envelope math. Here are some facts about the human visual system: an ideal human eye has an angular resolution of about 1/60th of a degree at the central vision; each eye has a horizontal field of view (FOV) of ~160° and a vertical FOV of ~175°; and the two eyes work together for stereoscopic depth perception over a FOV ~120° wide and ~135° high.

Since the current manufacturing processes for both liquid-crystal displays (LCDs) and organic light-emitting diode displays (OLEDs) produce a uniform pixel density across the entire surface of the spatial light modulators, the numbers above yield a whopping ~100 megapixels for each eye and ~60 megapixels for stereo vision.
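The arithmetic behind these figures is easy to check. Here is a minimal sketch; the 1/60° acuity and the FOV values come from the text above, and the uniform-pixel-density assumption matches current LCD and OLED manufacturing:

```python
# Back-of-envelope pixel counts for a display matching ideal human acuity.
# At ~1/60 degree angular resolution, the eye resolves 60 pixels per degree.
PIXELS_PER_DEGREE = 60

def pixel_count(h_fov_deg, v_fov_deg):
    """Pixels needed to cover a field of view at a uniform 60 px/degree."""
    return (h_fov_deg * PIXELS_PER_DEGREE) * (v_fov_deg * PIXELS_PER_DEGREE)

per_eye = pixel_count(160, 175)  # monocular FOV from the text
stereo = pixel_count(120, 135)   # binocular overlap region

print(f"Per eye: {per_eye / 1e6:.0f} megapixels")        # ~100 megapixels
print(f"Stereo overlap: {stereo / 1e6:.0f} megapixels")  # ~60 megapixels
```

The exact products come out to roughly 101 and 58 megapixels, which round to the ~100 and ~60 cited above.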

While this would provide perfect 20/20 visual acuity, packing such a high number of pixels into the small screens of a VR HMD is obviously not feasible with current technologies. To put this into context, the two displays in the HTC Vive HMD contain a total of 2.6 megapixels, resulting in quite visible pixelation artifacts. Most course participants raised their hands when asked whether the pixel densities in current VR HMDs were unacceptable (I suspect the rest agreed but were too lazy to raise their hands!).

Even if it were possible to make VR displays with 60 million to 100 million pixels, other system-level constraints would make them impractical. One is the enormous graphical and computational resources required to create enough polygons to render visual richness matching such high pixel density on the screens. Another is bandwidth: current interfaces cannot transport such enormous amounts of data among the computation engines, memory devices, and display screens while simultaneously meeting the stringent latency requirements of VR.

So … is this a dead end? The answer is a resounding “no!” Challenges such as these are what innovators and engineers live for! Let’s look at biology for some clues. How does the human visual system address this dilemma? It turns out that high visual acuity for humans is limited to a very small visual field, about ±1° around the optical axis of the eye, centered on the fovea. If we could track the user’s eye gaze in real time, we could render a high number of polygons in a small area around the viewing direction and drop the number exponentially as we move away from it. Graphics engineers already have a term for such technologies, now under exploration: “foveated” or “foveal” rendering. This approach would drastically reduce the graphics workload and the associated power consumption.
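As a toy illustration of the idea, here is a sketch of a foveated shading-density function: full resolution within ~1° of the gaze direction, with an exponential falloff beyond it. The falloff constant is purely illustrative, not taken from any shipping renderer:

```python
import math

def shading_density(eccentricity_deg, falloff=0.3):
    """Relative rendering density: full resolution inside ~1 degree of the
    gaze direction (the fovea), dropping exponentially outside it."""
    if eccentricity_deg <= 1.0:
        return 1.0
    return math.exp(-falloff * (eccentricity_deg - 1.0))

# Density at the fovea vs. 30 degrees into the periphery:
print(shading_density(0.5))   # full resolution at the gaze point
print(shading_density(30.0))  # a tiny fraction: render very coarsely
```

Integrating such a falloff over the whole FOV is what collapses a nominal 100-megapixel rendering workload into something tractable.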

Clearly, we are still in the early days of VR, with many challenges remaining to be solved, including presenting adequate visual acuity on displays. This is an exciting time for the display industry and its engineers, reminiscent of the days at the onset of revolutionary display discoveries for high-definition televisions (HDTVs) and smartphones.

How About a 40-Megapixel Smartphone?

By Stephen P. Atwood

In an excellent keynote address at Display Week, “Enabling Rich and Immersive Experiences in Virtual and Augmented Reality,” Google’s Clay Bavor discussed several aspects of his company’s strategy to develop immersive VR/AR applications and enabling hardware. Google has rather firmly focused its efforts on an architecture that utilizes commercially available smartphones. Though Bavor mentioned dedicated VR headset development and showed one image of a notional device, overall the company is concentrating on making smartphones work for VR applications.

There are challenges, including latency and resolution. Latency creates a discontinuous experience during head movement, and resolution limitations create effects similar to having poor vision in real life. Both of these challenges can be addressed, as Bavor explained, but what really got my attention was his announcement that Google and an unnamed partner have developed a smartphone display with 20-megapixel resolution per eye! That’s presumably at least a 40-megapixel total display, and it’s OLED-based as well. That’s the good news.

The bad news is that in order to supply content to that device at the required frame rates of 90 to 120 Hz, the raw data stream approaches 100 Gb/sec. Yikes! That’s not going to happen tomorrow, although high-performance video data compression is quickly becoming an option. This is described in the May/June issue of Information Display, which features the article “Create Higher Resolution Displays with the VESA DSC Standard.” Maybe that’s a path forward.
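That data-rate figure is straightforward to reproduce, assuming conventional uncompressed 24-bit RGB (my assumption; the keynote did not state the bit depth):

```python
# Raw, uncompressed data rate for a 40-megapixel stereo display.
PIXELS = 40e6          # ~20 megapixels per eye, two eyes
BITS_PER_PIXEL = 24    # assumed: 8 bits per RGB channel, no compression

def raw_rate_gbps(frame_rate_hz):
    """Raw video bandwidth in gigabits per second."""
    return PIXELS * BITS_PER_PIXEL * frame_rate_hz / 1e9

print(f"90 Hz:  {raw_rate_gbps(90):.0f} Gb/s")   # ~86 Gb/s
print(f"120 Hz: {raw_rate_gbps(120):.0f} Gb/s")  # ~115 Gb/s
```

The 90 and 120 Hz cases bracket the ~100 Gb/sec figure quoted above.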

Google’s path forward is to discuss what it calls “foveal rendering,” which basically uses iris tracking to determine where your eye is focused, then renders a small region in the center of your vision at full resolution. The rest of the scene is rendered at lower resolution. Presumably, if you looked at the same space long enough, the rest of the periphery would also fill in at high resolution. Bavor also alluded to the need for an algorithm that could anticipate where your eye would be moving, akin to how a surfer anticipates a coming wave and gets up to speed as the crest arrives. Similarly, any algorithm performing this type of advanced processing would presumably need to anticipate what the observer is going to do; otherwise, the reaction time and subsequent latency would ruin the magic of the experience.

Whether this approach achieves commercial viability in the near future or not, it’s exciting to think about the various challenges that were overcome to make even a few prototypes at this resolution, and how this furthers the very high pixel-density capabilities already in place. It’s clearly an exciting target with a killer application. Time to start paddling – the next wave is coming.

I-Zone: Innovation in Light and Sound

By Ken Werner

The Innovation Zone (I-Zone) at SID Display Week 2017 featured approximately 50 exhibitors, more than double the average of years past. Among the genuine innovations on display was a high-resolution automobile headlamp with a 30,000-pixel LCD shutter, shown by the University of Stuttgart and automotive lighting company Hella. The light pattern of the headlamp can be controlled with great flexibility and can be integrated with a car’s GPS and situational-awareness systems.

Another intriguing innovation was presented by the gaming headphone maker Turtle Beach (in conjunction with Nepes Display). Its HyperSound transparent, flat-panel loudspeakers (Fig. 2) work on a different principle than the old NXT speakers, whose technology has been adapted by LG and Sony in their current high-end OLED TVs.

Fig. 2:  Turtle Beach (in conjunction with Nepes Display) was named an I-Zone honoree for its flat-panel speakers. Photo courtesy of Ken Werner.

Turtle Beach’s speakers are capacitive, and the vibrating layers are driven by two signals; the first is 100 kHz and the second is 100 kHz plus the audio sideband. The result, as explained by Turtle Beach’s rep, is that the audio portion of the signal is constructed in the space in front of the speakers. The effect is startling, with two speakers able to construct a surround-sound field of remarkable clarity and precise location of sounds. Another characteristic is that the sound field is highly directional and cannot be heard even 10 degrees or so off axis. Turtle Beach is currently field-testing the technology in kiosks, and is looking for manufacturing and application partners.
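As a rough signal-level sketch of the drive scheme as the rep described it (my own simplification, treating the second signal as a simple amplitude-modulated carrier; the actual HyperSound electronics are surely more refined), the two signals might look like this:

```python
import math

SAMPLE_RATE = 1_000_000  # Hz; comfortably above the 100 kHz carrier
CARRIER_HZ = 100_000     # ultrasonic carrier frequency from the text
AUDIO_HZ = 1_000         # illustrative 1 kHz audio tone

def drive_signals(n_samples):
    """Sketch of the two drive signals: a bare 100 kHz carrier, and the
    carrier amplitude-modulated by the audio (carrier plus sidebands).
    Per the description, mixing of the two in the air in front of the
    panel reconstructs the audible difference signal."""
    t = [i / SAMPLE_RATE for i in range(n_samples)]
    carrier = [math.sin(2 * math.pi * CARRIER_HZ * ti) for ti in t]
    modulated = [(1.0 + 0.5 * math.sin(2 * math.pi * AUDIO_HZ * ti))
                 * math.sin(2 * math.pi * CARRIER_HZ * ti) for ti in t]
    return carrier, modulated

carrier, modulated = drive_signals(1000)  # 1 ms of both signals
```

Because both drive signals live entirely in the ultrasonic range, only the region where the beams interact carries audible sound, which is consistent with the highly directional field described above.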

Digital Signage Is a Quiet Giant of the Display Industry

By Gary Feather

The digital signage (DS) sessions at Display Week opened with discussions of the market aspects of the technology and followed with descriptions of how the displays are being implemented. The general consensus was that display innovations will grow the DS market by over $1 billion annually within just two years.

The target markets now and in the future are overwhelming. Worldwide business and technology markets discussed at Display Week included:

•  Sports venues and public arenas
•  Las Vegas and gaming (sports books and entertainment)
•  Transportation (trains and planes)
•  Government (command, control, communications, and information)
•  Retail and digital out of home (DOOH)
•  Corporate and conferencing
•  Interactive and augmented-reality (AR) entertainment, with performers in the AR environment reaching huge audiences
•  Cinema, to replace digital light processing (DLP) projection

Many in the industry don’t realize that DS display solutions account for billions of dollars in sales. Signage includes the special (and often artistic) implementation of LCD panels; LCD-tiled walls measuring hundreds of square feet; OLED panels with unique image quality; and LED-based, seamless tiled panels of any shape, size, and pixel pitch. Digital signage is currently exploiting all the developments of the display industry to give consumers unique and valued visualization solutions.  •