Researchers at the Human Media Lab at Queen's University, Canada, have developed a holographic 3D video-conferencing system that would raise the eyebrows of even the staunchest sci-fi fan.
The TeleHuman, as the technology is known, works with two people standing in front of their own cylindrical video-conferencing pods. Each person sees a life-size holographic 3D projection of the other, visible from 360 degrees around the pod, creating the illusion that the other person is standing inside it.
"Why Skype when you can talk to a life-size 3D holographic image of another person?" said Roel Vertegaal, director of the Human Media Lab, on the Queen's University website.
The technology works with a combination of six Microsoft Kinect sensors, a 3D projector, a 1.8-meter-tall translucent acrylic cylinder, and a convex mirror.
While this exciting technology appears one step closer to Star-Wars-like holographic communication, a few issues come to mind. The first is latency: the short delay introduced by current Kinect sensors and by processing and transferring the sensor data to the screen. This will undoubtedly affect the quality of conversations.
The next question is bandwidth, which becomes an issue when TeleHuman pods connect over long distances via the internet, or from more than two locations. The researchers are already tackling this problem of multiparty video-conferencing by adopting a different communication layer. They also discovered a trade-off between improving the 3D experience by having participants wear shutter glasses and supporting eye contact between them.
“Tools such as shutter glasses, which obstruct views of the remote participants' eyes, are most likely not recommendable for bi-directional communication systems,” according to their research paper.
The lab has also developed another tool, the BodiPod, geared towards studying human anatomy through physical interaction and speech. For example, when a user walks towards the BodiPod, layers of skin peel away from a 3D human model, revealing its organs and bones. Users can issue voice commands such as “brain” to zoom into that part of the 3D model automatically.
Both devices are exciting and creative endeavors. We look forward to seeing further developments.
More information on the TeleHuman and BodiPod can be found on the university website.