By Roland Piquepaille
Computer scientists from the University of California, San Diego (UCSD) have developed a wireless application for ubiquitous video dubbed 'RealityFlythrough.' By mixing images and video feeds from mobile cameras, the application dynamically creates a 3D virtual environment that remote viewers can explore. The software has already been tested by emergency response teams during a simulated terrorist attack: the responders wore head-mounted wireless video cameras and GPS devices, and the control center was able to virtually explore the disaster site. This technology could also be used for virtual tourism or virtual shopping, but one of the researchers had a 'cool' idea: delivering a driving experience on the Web. Instead of reading a set of instructions telling you to turn left or right, imagine if you could 'fly' the drive before taking it.

The UCSD news release also says that remote users could watch a single view of a virtual environment instead of staring at a wall of monitors.
"Instead of watching all the feeds simultaneously on a bank of monitors, the viewer can navigate an integrated, interactive environment as if it were a video game," said UCSD computer science and engineering professor Bill Griswold, who is working on the project with Ph.D. candidate Neil McCurdy. "RealityFlythrough creates the illusion of complete live camera coverage in a physical space. It's a new form of situational awareness."
Here is how RealityFlythrough works.
The RealityFlythrough software automatically stitches the feeds together by combining the visual data with each camera's location and the direction it is pointing. "Our system works in ubiquitous and dynamic environments, and the cameras themselves are moving and shifting," said McCurdy. "RealityFlythrough situates still photographs or live video in a three-dimensional environment, making the transition between two cameras while projecting the images onto the screen."
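To make that geometry concrete, here is a minimal Python sketch of the general idea: situating one camera's frame as a quad in a shared 3D scene using only the camera's GPS position and compass heading. This is an illustration of the technique, not the UCSD implementation; the CameraFeed type, the image_quad function, and all parameter defaults are invented for the example.

import math
from dataclasses import dataclass

@dataclass
class CameraFeed:
    """One mobile camera: a GPS-derived position and a compass heading (hypothetical type)."""
    x: float          # metres east in a local ground frame
    y: float          # metres north
    heading: float    # direction the lens points, in degrees clockwise from north

def image_quad(feed: CameraFeed, dist: float = 5.0,
               fov_deg: float = 60.0, aspect: float = 4 / 3):
    """Corners of the 3D quad onto which this feed's frame would be projected.

    The quad sits dist metres in front of the camera, perpendicular to its
    heading, and is sized to match the assumed horizontal field of view.
    """
    th = math.radians(feed.heading)
    fx, fy = math.sin(th), math.cos(th)    # forward unit vector in the ground plane
    rx, ry = math.cos(th), -math.sin(th)   # right unit vector, 90 degrees clockwise
    half_w = dist * math.tan(math.radians(fov_deg) / 2)
    half_h = half_w / aspect
    cx, cy = feed.x + fx * dist, feed.y + fy * dist   # centre of the quad
    return [(cx + sx * half_w * rx, cy + sx * half_w * ry, sz * half_h)
            for sx, sz in ((-1, 1), (1, 1), (1, -1), (-1, -1))]

# Example: a camera 10 m east of the origin, looking back due west.
print(image_quad(CameraFeed(x=10.0, y=0.0, heading=270.0)))

A renderer would then texture each quad with that camera's latest frame, and navigating the scene amounts to flying a virtual camera among these quads.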
As an example, here are some snapshots of such a transition. "The transition uses two 'filler' images to provide additional contextual information. During this transition the viewpoint moves roughly 20 meters to the right of the starting image and rotates 135 degrees to the right." (Credit: University of California, San Diego)
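The 20-meter, 135-degree transition in that caption is essentially a pose interpolation between two cameras. A sketch of that step might look like the following; again, the names are hypothetical and a simple linear blend stands in for whatever easing the actual system uses.

import math

def shortest_turn(a: float, b: float) -> float:
    """Signed smallest rotation, in degrees, that takes heading a to heading b."""
    return (b - a + 180.0) % 360.0 - 180.0

def transition_pose(start, end, t: float):
    """Interpolated viewpoint at fraction t in [0, 1] of a flythrough transition.

    start and end are (x, y, heading_degrees) tuples in a local
    metre-based ground frame; these are illustrative, not the paper's types.
    """
    (x0, y0, h0), (x1, y1, h1) = start, end
    return (x0 + (x1 - x0) * t,
            y0 + (y1 - y0) * t,
            (h0 + shortest_turn(h0, h1) * t) % 360.0)

# The caption's example: move about 20 m to the right while turning 135 degrees.
start = (0.0, 0.0, 0.0)     # at the origin, facing north
end = (20.0, 0.0, 135.0)    # 20 m to the east, rotated 135 degrees clockwise
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(transition_pose(start, end, t))

During such a transition, the 'filler' images mentioned in the caption would be projected along the interpolated path to keep the viewer oriented between the two real camera positions.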
Of course, this system has some limitations, for example when there is not enough live video coverage or where GPS cannot provide adequate location information. But the researchers say they have largely solved these technical challenges.
The research was presented on June 6, 2005, at the MobiSys 2005 conference, held June 6-8 in Seattle, and has been published as "A Systems Architecture for Ubiquitous Video" (PDF format, 14 pages, 760 KB). The images shown above come from this document.
You'll find other references and videos on the RealityFlythrough website. But be warned: the videos range from 99 MB to 209 MB.
McCurdy, who expects to finish his Ph.D. in 2006, may start his own company this year to bring the technology to commercial markets.
Sources: Doug Ramsey, UCSD news release, June 7, 2005; and various websites