Information Display recently had the chance to talk to V. Michael Bove, Jr. about data processing and other logistical challenges designers face in order to render real-time holographic TV images. (One of the Frontline Technology stories in this issue, "Real-Time Dynamic Holographic 3-D Display," discusses a possible approach to creating such a TV.) Bove heads up the Object-Based Media Group at the MIT Media Lab. He is co-author with the late Stephen Benton of the book Holographic Imaging (Wiley, 2008) and served as co-chair of the 2012 International Symposium on Display Holography.
Compiled by Jenny Donelan
V. Michael Bove, Jr.
Q: From today's vantage point, what would it take to encode, stream, and make set-top boxes that could decode and render holographic TV images?
A: Something that people observed in the '60s, '70s, and '80s about holographic television was that it requires a lot of pixels. Even now, it's unrealistic to think of transmitting all the necessary pixels in a large, high-resolution, optically captured holographic image. Even if you could, there's a bigger problem: if you make a hologram using coherent capture, you're making it for a particular size of display that has to have RGB light sources of given wavelengths. These must match the wavelengths of the lasers that you use to capture the scene, and so forth. So it's not terribly flexible, even if you had the bandwidth for dealing with it and even if you had the high-powered pulsed lasers for scene capture.
People are not for the most part thinking about doing holographic capture and then sending the hologram to a holographic display, but rather about capturing enough information about a scene in such a way that it could be turned into a hologram.
This is doable with a CGI model of a scene – you know everything about it – but for a real scene, you need either a range-finding camera, a lightfield camera, or an array of small ordinary cameras. Somehow you're going to take the information about the shapes of the objects or about the lightfield coming from the scene and you're going to convert that into data so you can make a hologram. One of the nice things about doing that is that those representations from a lightfield camera or an array of parallax images are a bit easier to compress and transmit over networks than a hologram. The problem is you then need the computation in the display not just to decode the data but to generate the hologram, so you have to be able to generate the diffraction patterns in real time. If you're working with a 3-D model, then you have to do the rendering of the model before you can even generate the diffraction patterns.
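The last step Bove describes – turning scene geometry into a diffraction pattern – can be illustrated with a simple point-cloud method, one common way to compute holograms from 3-D data. This is a minimal sketch, not the Media Lab's actual pipeline; the function name, grid size, pitch, and wavelength are all illustrative assumptions:

```python
import numpy as np

def point_cloud_hologram(points, plane_w=256, plane_h=256,
                         pitch=10e-6, wavelength=633e-9):
    """Sum spherical-wave contributions from each scene point onto a
    hologram plane, then interfere with a reference beam.

    points: iterable of (x, y, z, amplitude) tuples in meters.
    Assumed parameters: 10-micron pixel pitch, 633-nm red wavelength.
    """
    k = 2.0 * np.pi / wavelength
    xs = (np.arange(plane_w) - plane_w / 2) * pitch
    ys = (np.arange(plane_h) - plane_h / 2) * pitch
    X, Y = np.meshgrid(xs, ys)

    field = np.zeros((plane_h, plane_w), dtype=complex)
    for (px, py, pz, amp) in points:
        # Distance from each hologram-plane sample to the scene point
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += amp * np.exp(1j * k * r) / r

    # Interfere with an on-axis unit-amplitude reference wave; the
    # recorded fringe pattern is the resulting intensity, normalized.
    pattern = np.abs(field + 1.0) ** 2
    return pattern / pattern.max()

# One point 10 cm behind the hologram plane produces a zone-plate fringe pattern
fringes = point_cloud_hologram([(0.0, 0.0, 0.1, 1.0)])
```

Even this toy version hints at the cost he mentions: every scene point touches every hologram pixel, which is why real-time generation demands so much computation.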
I should mention that the work we've been doing at the Media Lab with the electronic holographic displays has been horizontal parallax only because that makes the problem a bit more computationally tractable. If you have full parallax, then you need millions of pixels per scan line and millions of scan lines.
And there are other things you can do to make things simpler. A company called SeeReal is doing an eye-tracking display for which they can make the views very narrow because they can steer the images to where your eyes are. Since a hologram's pixel pitch relates to the view angle, if you are going to make a hologram that has, say, one degree of viewing angle, you don't need so many pixels. That makes the computation a lot easier.
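The pitch-versus-angle tradeoff Bove credits SeeReal with exploiting can be checked with a back-of-envelope calculation from the grating equation. This sketch assumes a 633-nm red wavelength and a 30-cm-wide display; those numbers are illustrative, not from the interview:

```python
import math

WAVELENGTH = 633e-9  # assumed red laser wavelength, in meters

def required_pitch(view_angle_deg):
    """Pixel pitch needed to diffract light over a given viewing angle,
    from the grating equation: pitch = lambda / (2 sin(theta/2))."""
    return WAVELENGTH / (2.0 * math.sin(math.radians(view_angle_deg) / 2.0))

def pixels_per_line(display_width_m, view_angle_deg):
    """Horizontal pixel count for one scan line of the hologram."""
    return display_width_m / required_pitch(view_angle_deg)

# Same 30-cm-wide display, two different view zones
wide = pixels_per_line(0.3, 30.0)   # conventional 30-degree viewing zone
narrow = pixels_per_line(0.3, 1.0)  # 1-degree zone steered to the viewer's eye
```

With a 30-degree zone the pitch must be about 1.2 microns, giving hundreds of thousands of pixels per scan line; narrowing the zone to one degree coarsens the pitch roughly 30-fold and cuts the per-line pixel count by the same factor, which is why eye tracking makes the computation so much easier.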
Q: With the understanding that the requirements vary according to how the hologram is created, compressed, and streamed, roughly how much processing power would be required to operate a holographic TV in somebody's living room someday?
A: In the early 1990s, my research group's collaboration with the late Stephen Benton's group involved building specialized computational hardware. We built a desktop supercomputer for doing just that. About 10 years later, I realized that the graphics-processing units in PCs or game consoles were becoming fast enough that you could potentially use them to generate holograms. In about 2003, we found that in fact you could do about as well with GPUs as with our specialized hardware. GPUs get faster every year and so that's been the direction we've taken since then.
Q: What would be the size of a holographic display in terms of processing power? Will we need the equivalent of a roomful of computers?
A: For horizontal parallax, you could probably build a holographic display for a living room with somewhere in the neighborhood of a dozen or two GPUs in it. For full parallax, even GPUs are not going to completely solve the problem. You're probably looking at hundreds of GPUs, and that's not quite practical. Something to be aware of is that graphics processors are very, very fast, but they're also very power hungry. The higher-end GPU cards that come in fancy PCs are largely heatsinks and fans. We find we run into trouble keeping them cool and powered if we want to put just three or four of those in a small box. If you're talking about putting a lot more into a unit for a consumer, it would use a lot of electricity, run hot, and make noise.
Q: Do you see any kind of holographic TV available in the next 5 years? Ten years?
A: The takeaway is that, as long as we're talking about horizontal parallax only, we are not orders of magnitude away from having enough computation to do that kind of thing. We are driving our holographic displays in the lab with a handful of GPUs, and they work.
However, one of the "gotchas" in holographic video displays is that unlike typical TVs, as you increase the size of the screen, the number of pixels has to grow, which leads to scaling problems. Eventually you run into a bottleneck somewhere, whether it's computation or interconnects or whatever. So it's probably going to be a while before we have holographic video displays the size of the biggest living-room TV that you can buy at Best Buy today. However, if we're doing desktop monitors or 26-in.-diagonal displays or something similar, it's not outrageously far in the future.
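The scaling problem Bove describes follows from the fact that diffraction fixes the pixel pitch: unlike a conventional TV, where a 4K panel has the same pixel count at any size, a full-parallax hologram's pixel count grows with screen area. A rough sketch, again assuming a 633-nm wavelength and a 30-degree view zone (illustrative numbers, not from the interview):

```python
import math

WAVELENGTH = 633e-9  # assumed red laser wavelength, in meters
# Pitch fixed by diffraction for an assumed 30-degree viewing zone (~1.2 um)
PITCH = WAVELENGTH / (2.0 * math.sin(math.radians(30.0) / 2.0))

def full_parallax_pixels(width_m, height_m):
    """Total pixel count when the pitch cannot grow with the screen."""
    return (width_m / PITCH) * (height_m / PITCH)

small = full_parallax_pixels(0.52, 0.32)  # roughly a 24-in. desktop monitor
large = full_parallax_pixels(1.43, 0.80)  # roughly a 65-in. living-room TV

uhd_panel = 3840 * 2160  # a conventional 4K panel, fixed at any size
```

Even the desktop-sized case is already four to five orders of magnitude beyond a 4K panel, and the living-room case is several times larger still, which is why Bove expects desktop-scale holographic displays to arrive well before wall-sized ones.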