Ubiquitous Displays at IDW '04
It would be difficult to cover even a sizable minority of the 458 papers delivered at the 2004 International Display Workshops held in Niigata, Japan, in December 2004, so we will focus on the issues raised by just three of them.
by Ken Werner
IMAGINE that you live in a world in which displays are ubiquitous. They are in stores, in airports, at bus stops, on the outside of buildings, in building lobbies, in kiosks and ATMs, in the windows of businesses, in hotel rooms, in automobiles and airplanes, and inside private homes. You say that you do live in such a world? Okay. Now imagine that all of these displays know who you are.
This was the jumping-off point for some of the challenging ideas considered in "Ubiquitous Displays for Ubiquitous Computing," Hideyuki Nakashima's featured invited address at the 2004 International Display Workshops, held December 8–10, 2004, at the Toki Messe in Niigata, Japan (Fig. 1). Nakashima is President of Future University in Hakodate, Hokkaido. As an example of the ubiquitous-display concept, let's say you are walking down an airport concourse. A display recognizes you and informs you of your flight and gate information, tells you how long you have until boarding time, and how long it will take you to reach your gate at your current walking speed.
Quite apart from the impressive network infrastructures such applications will require, there are interesting philosophical and hardware considerations. First, as Nakashima said with deceptive mildness, "ordinary conceptions of privacy need revision."
Second, how does the display recognize and communicate with you? Nakashima and his colleagues favor location-based communication. It is your physical proximity to the display that triggers communication and provides the context for the transfer of information. This is not the only possibility, Nakashima observed. Location-based services can be (and are) delivered over cellular networks, with location information provided by GPS, but location-based communication uses fewer system resources and is more effective because GPS positioning is not always precise enough for this application.
Fig. 1: IDW '04 was held at the Niigata Convention Center (Toki Messe), which is in Niigata's port area on the Shinano River.
Among the location-determining technologies investigated by Nakashima and his former colleagues at the Cyber Assist Research Center of Japan's National Institute of Advanced Industrial Science and Technology (AIST) are radio-frequency ID (RFID) tags and an infrared beam that reflects from a target you wear or carry.
There are additional complications when you are not alone on that airport concourse. Let's say the display successfully identifies the individual members of a cluster of 20 persons with RFIDs. Who has priority? Who is the privileged person who receives the first communication from the display? Who doesn't receive any communication at all? By what rules are the decisions made?
Then, do you care that 19 others are looking at your flight information? Perhaps not. What about the current status of your stock portfolio as you walk past a broker's window? Are we ready to accept that kind of revision to our "ordinary conceptions of privacy"? Alternatively, Nakashima and his colleagues are thinking about the possibilities for private visual communication in public spaces. One possibility is a retinal scanning display that finds you and presents information to you alone as you walk by.
"Ubiquitous displays" are just one part of a ubiquitous computing initiative. As Nakashima said in his talk, "MIT's Oxygen project, Microsoft's Easy Living, and Hewlett-Packard's Cool Town, as well as our Cyber Assist project, are among those announcing their new directions: making IP machines invisible from human users and yet providing a rich, ubiquitous, supporting environment."
Measuring Motion Blur
For many years, the problem of motion blur in LCDs was attributed solely to the switching time of the displays' liquid-crystal cells. But as switching times became faster, a substantial portion of the blurring seen in moving images remained. As this century began, investigators started to understand that the sample-and-hold nature of LCD addressing provided a second source of motion blur. There was a clear need for a way of evaluating the degree of blur that viewers actually experience, taking into account that "motion blur on hold-type displays is caused by the mismatch between the movement of human eyeballs and the motion of images," as stated in an invited paper by J. Someya and Y. Igarashi (see below).
This was the subject of "A Review of MPRT Measurement Method for Evaluating Motion Blur of LCDs," a well-attended invited paper by Jun Someya (Mitsubishi Electric Corp.) and Youichi Igarashi (Hitachi Displays).
Moving-picture response time (MPRT) is the metric developed by several companies in 2001 to quantify the amount of blur that a viewer experiences while looking at a motion picture displayed on an LCD with the viewer's eyes in smooth pursuit of the moving object – a condition called smooth-pursuit eye-tracking (SPET) – which applies when the angular velocity of the image is less than a particular critical value. This was an important consideration because, if SPET applies – and testing showed that it does – measurements can be made with a commonly available CCD camera.
The determination of MPRT starts with the measurement of blurred edge time (BET) on the display using a pursuit-camera system. An initial gray level and a final gray level, with a vertical boundary between them, are placed on the screen of the device being tested, and the pattern is moved to the right at an angular velocity in the approximate range of 5–10° per second (Fig. 2). Tests on human subjects indicate that the degree of perceived blur is not sensitive to changes in angular velocity over this range, Someya said. The camera-to-screen distance can be set with considerable freedom, Someya told ID following his talk, as long as the camera can track the moving edge accurately.
The pursuit camera tracks the boundary between the two levels to capture the data. The BET is the time it takes the relative luminance to move from 10% to 90% of the transition to the new level; dividing the BET by 0.8 extrapolates it to the extended blurred edge time (EBET).
Fig. 2: A pursuit-camera system is used to measure the blurring of a moving image on a liquid-crystal display. [Figure is from J. Someya and Y. Igarashi, "A Review of MPRT Measurement Method for Evaluating Motion Blur of LCDs," IDW '04 Digest, 1571–1574 (2004).]
EBET is measured 42 times for a display. Seven equally spaced gray levels, including black and white, are determined for each display. A test pattern such as that shown in Fig. 2 is constructed for the transition from each of these levels to each of the remaining six levels, which provides 42 results. The MPRT is the average of the 42 EBET measurements.
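The arithmetic behind the metric can be sketched as follows. This is an illustrative reconstruction, not the standard's reference procedure: the specific gray-level values, the BET numbers, and the function names are hypothetical; only the 10–90% window, the 0.8 correction, and the 42-transition average come from the paper.

```python
from itertools import permutations

def mprt(bet_ms_by_transition):
    """Average the 42 extended blurred-edge times (EBETs).

    bet_ms_by_transition maps (initial_level, final_level) pairs
    to measured blurred-edge times (BET) in milliseconds.
    """
    # Seven equally spaced gray levels, black (0) through white (255);
    # the intermediate values here are illustrative.
    levels = [0, 43, 85, 128, 170, 213, 255]
    transitions = list(permutations(levels, 2))  # 7 * 6 = 42 ordered pairs
    # BET covers only the 10%-to-90% portion of each transition,
    # so dividing by 0.8 extrapolates to the full transition (EBET).
    ebets = [bet_ms_by_transition[t] / 0.8 for t in transitions]
    return sum(ebets) / len(ebets)

# Example with a flat (hypothetical) 12-ms BET for every transition:
fake_bets = {t: 12.0
             for t in permutations([0, 43, 85, 128, 170, 213, 255], 2)}
print(round(mprt(fake_bets), 1))  # 12 / 0.8 = 15.0
```

In practice each of the 42 BETs would come from a separate pursuit-camera measurement; the sketch only shows how the measurements combine into a single figure of merit.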
All of this would not mean much if the measured MPRT did not correlate very well with subjective assessments of blur experienced by viewers. Tests, said Someya, have produced an excellent correlation of 0.935. Someya concluded that MPRT is an objective way to measure the motion blur on LCDs that correlates with what viewers experience. It is therefore a valuable tool with which developers can measure the results of design changes in their displays, and it provides a specification that consumers (and possibly marketers) will probably find meaningful.
A VESA committee is currently working on the standardization of MPRT, and Someya anticipates that a preliminary edition of VESA's new flat-panel-display measurements standard (FPDM2) containing MPRT will be published this year. Commercial test sets for measuring MPRT already exist, and one was on display in the IDW exhibition (see the exhibits section later in this article).
During the question and answer period, Brian Berkeley (VP, LCD Business, Samsung Electronics) commented that although MPRT had been specifically designed to assess motion blur on LCDs, it is being used by some PDP makers to compare their technology with LCDs. But MPRT does not predict things such as the contour artifacts that appear on PDPs, he said. "What can be done," asked Berkeley, "to allow MPRT to be used to compare different display technologies?" Someya said that "MPRT was developed for LCDs. Another metric would be needed for PDPs and to compare technologies."
Active-Matrix Driving for EPD
At a special topical session on electronic paper, Mark Johnson (Philips Research Labs) presented never-before-released information on the active-matrix driving scheme for the electrophoretic display used in the Sony LIBRIé eBook now sold in Japan. The paper, "Driving an Active-Matrix Electrophoretic Display," was written by a sizable team from Philips and E-Ink Corp. E-Ink Corp. makes the electrophoretic front plane for the display; Philips makes the display's active-matrix backplane and designed the driving scheme that produces two levels of gray plus black and white.
Fig. 3: Kiyoto Fujioka's collection of old CRTs and television sets was a popular attraction at the IDW '04 exhibition.
Fig. 4: This operating RCA CTC-5, a 21-in. color TV set (ca. 1956), seen here with Fujiokasan himself, was part of the Fujioka collection.
The E-Ink Corp. front plane consists of microcapsules containing a liquid and two kinds of tiny pigment particles, black ones that are negatively charged and white ones that are positively charged. The microcapsules are laid down as a layer when the display is made. If a positive voltage is applied to a bottom electrode under the microcapsules (relative to the transparent top electrode), the positive particles will migrate to the top and produce a white image, while the black particles will migrate to the bottom. Applying +15 V for 300 msec will drive a complete black-to-white transition.
Philips has developed a way of also driving the display to intermediate states in which the black and white particles are partially intermingled in the body of the capsule, thus producing two levels of gray. This could have been done, Johnson said, by applying intermediate voltages for a fixed time or applying the original voltage for a shorter time. For the sake of driving simplicity, Philips uses a constant voltage and varies the width of the pulses.
Each of the display's states is stable, allowing images to remain on the display when power is off, so an important part of the driving scheme is a look-up table through which the driving circuitry can determine the voltage duration needed to drive each display pixel from its existing level to the new one. This process is precise when driving to full black or full white, but driving to one of the gray levels is imprecise enough – errors can exceed two CIELAB L* lightness units – that "ghosts" of the previous image can appear within the new image. To avoid ghosting, the transitions from any initial level to the same target level have to agree within one L* unit.
A brute-force solution is to drive the pixel first to black, then to white, and then to the desired gray level, but this takes three cycles instead of one, and the display visibly "flashes" during the process. To obtain accuracy without a total reset, Philips drives the pixel to the closest rail – the closest of the two extreme optical states, either black or white – and then drives to the desired gray level. "Closest rail" means the rail closest to the new desired state, not the current one. A closest-rail reset is faster than a total reset and minimizes the possibility of optical flicker. For greater accuracy, independent of the original optical state, Philips extends the time of the reset pulse. The rail-stabilized driving scheme is used in the 6-in. 167-ppi display integrated into Sony's LIBRIé eBook.
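The closest-rail logic Johnson described can be sketched roughly as follows. All names, level orderings, and pulse durations here are illustrative assumptions, not Philips's actual look-up table or parameters; the sketch only captures the decision "gray targets get a reset to the rail nearest the target, rail targets do not."

```python
# Optical states ordered black -> white; values are illustrative.
BLACK, DARK_GRAY, LIGHT_GRAY, WHITE = range(4)

# Hypothetical look-up table: pulse duration (ms) of the fixed-amplitude
# drive voltage assumed for each (current, target) transition.
PULSE_MS = {(a, b): 100 * abs(a - b) for a in range(4) for b in range(4)}

def drive_pixel(current, target):
    """Return the (from, to) transitions that display `target`.

    Rail (black/white) transitions are accurate, so a gray target is
    reached by first resetting to the rail nearest the *target* level,
    then driving from that rail to the gray level.
    """
    if target in (BLACK, WHITE):
        return [(current, target)]          # rail targets need no reset
    rail = BLACK if target == DARK_GRAY else WHITE
    return [(current, rail), (rail, target)]

def total_pulse_ms(transitions):
    """Total drive time implied by the hypothetical look-up table."""
    return sum(PULSE_MS[t] for t in transitions)

# Driving a white pixel to dark gray: reset to black, then pulse to gray.
print(drive_pixel(WHITE, DARK_GRAY))                   # [(3, 0), (0, 1)]
print(total_pulse_ms(drive_pixel(WHITE, DARK_GRAY)))   # 400
```

Compare this with the brute-force alternative, which would always emit three transitions (to black, to white, to target) and visibly flash; the closest-rail path needs at most two.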
Fig. 5: Arisawa's "prism screen" (left part of the image) absorbed ambient light originating at a high angle, resulting in increased contrast.
Fig. 6: Arisawa's screen material uses a straightforward geometrical effect that is highly directional. When these pieces of the material are rotated by 180°, the white pieces become black and the black pieces become white.
During the question and answer session, an audience member asked how it was possible to keep the intermediate states of intermingled positive and negative particles stable. Johnson answered that he had been surprised by that himself and did not know how E-Ink accomplished it, but that he could attest to the fact that the gray states are just as stable as the black and white ones.
The exhibits at IDW are generally an informal, table-top affair, but there were about 75 exhibitors this time, a record. A popular stop was a display of old CRTs and television sets from the personal collection of Kiyoto Fujioka, some of which are shown here (Fig. 3). Of particular interest was an operating RCA CTC-5, a color-TV set (ca. 1956) with a 21-in. metal-cone CRT (Fig. 4).
Otsuka Electronics was demonstrating its Photal MPRT-1000 pursuit-CCD camera system for measuring motion artifacts in LCDs. This is the system that is being used in the creation of VESA's motion-artifact standard. The working distance for the system as shown at IDW '04 is 200–500 mm, but Otsuka representatives had told Mitsubishi's Jun Someya that other distances should be possible with a change of lenses.
At SID 2004 in May, Sony created a stir with a multilayered front-projection screen that used destructive interference to absorb ambient illumination while efficiently reflecting light from a projector. The result was a screen that appeared dark in ambient light but reflected a bright white for projected images.
At IDW '04, Arisawa Manufacturing Company showed a "prism screen" designed for super-short focal-length projectors like the NEC WT-600. The "shelves" of the prism are oriented horizontally, with the top surfaces of the prism colored black. Ambient light originating at a high angle is absorbed by these black surfaces. Projected light coming in from a low angle strikes the lower, reflective face of the prism and is reflected efficiently (Fig. 5). The effect is somewhat like that of the Sony screen, but it is highly directional (Fig. 6). Arisawa's Yoshikazu Umezawa knew of the Sony screen and expressed the opinion that his company's product might be more cost-effective.
There was much more to see and hear at IDW '04 than can possibly be presented here, and attendance surpassed 1400 despite the effects of the earthquake that disrupted Shinkansen service from Tokyo to Niigata and generated substantial uncertainty. The high attendance, said SID President Shigeo Mikoshiba at the conference banquet, is evidence of the continuing growth and vitality in display research and in the display industry as a whole.
IDW is sponsored by the Japan Chapter of the Society for Information Display (SID) and The Institute of Image Information and Television Engineers (ITE). IDW '05 will be combined with Asia Display and held in Takamatsu, Japan, December 6–9, 2005. •