The best thing to give first responders before they enter a smoky room or the site of a chemical spill, or to soldiers before they enter a hostile bunker, is a picture of what's inside. Exploring an unsecured space in 3D from a safe distance could be a matter of life or death.

A team at the Defense Advanced Research Projects Agency (DARPA) is helping to make that possible by funding efforts to combine powerful 3D imaging software, GPUs and pretty much any camera to generate a VR view of a potentially dangerous environment.

The intention of the system, dubbed Virtual Eye, is to let soldiers, firefighters and search and rescue personnel walk around a room or other enclosed area - virtually - before entering, enabling them to scope out the situation while avoiding potential dangers.

'The question for us is, can we do more with the information we have?' says Trung Tran, the program manager leading Virtual Eye's development for DARPA's Microsystems Technology Office. 'Can we extract more information from the cameras we're using today?'

The answer is a resounding yes. Even more impressive: any camera will suffice. Tran says the system is 'camera agnostic.'

How Virtual Eye Works

Emergency responders who have determined that a room is too dangerous to enter without more information would send in drones or robots to carry or place two cameras in different parts of the room. The Virtual Eye software then fuses the separate feeds into a single 3D virtual reality view in real time, extrapolating the data needed to fill in any blanks.
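To make the idea concrete: this is essentially what computer vision calls stereo matching. The sketch below is illustrative only, not DARPA's software; it uses OpenCV's semi-global block matcher to estimate a disparity map from two rectified views, and the file names and matcher parameters are hypothetical.

```python
# Illustrative sketch only -- not the Virtual Eye codebase.
# Assumes two calibrated, rectified views of the same scene;
# "left.png" and "right.png" are hypothetical file names.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# For each pixel, the matcher measures how far a feature shifts
# between the two views -- its disparity. Nearby objects shift more.
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,   # search range in pixels; must be divisible by 16
    blockSize=5,
)
# compute() returns fixed-point values scaled by 16
disparity = matcher.compute(left, right).astype(np.float32) / 16.0
```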

So, in a firefighting scenario, this could enable firefighters to look into a room, determine where a child in peril is located - perhaps behind a bed or other piece of furniture - see where flames are active, and plan their approach.

On the battlefield, soldiers could use Virtual Eye to detect if they might be ambushed, if an explosive is in a room or if a boobytrap awaits them.

'Understanding what we see is critical to making the right decisions in the battlefield,' says Tran. 'We can create a 3D image by looking at the differences between two images, understanding the differences and fusing them together.'
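The 'differences' Tran describes are those disparities. Given the cameras' focal length and the distance between them (the baseline), each disparity value converts to a depth, which is how two flat images become a 3D scene. A minimal, hypothetical example of that conversion:

```python
# Illustrative only: depth from disparity for a single pixel.
# focal_px (focal length in pixels) and baseline_m (camera separation
# in meters) are hypothetical calibration values.
focal_px = 700.0
baseline_m = 0.12
disparity_px = 24.0   # pixel shift between the two views

depth_m = focal_px * baseline_m / disparity_px
print(f"estimated depth: {depth_m:.2f} m")   # ~3.5 m
```

OpenCV's reprojectImageTo3D applies the same relation to every pixel of a disparity map, producing the kind of 3D point cloud a VR viewer can move through.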

A Real-Time 3D Video Experience

The Virtual Eye system under development relies on NVIDIA Tesla K20 GPU accelerators to stitch the images together and to extrapolate 3D data from them. Tran says the K20 was chosen because it was small enough to fit into a laptop.
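Stereo matching parallelizes naturally across pixels, which is why it maps well onto GPU accelerators. As a rough illustration (again, not the Virtual Eye pipeline), OpenCV's CUDA module can run block matching on the GPU, assuming an OpenCV build compiled with CUDA support; the parameters are hypothetical, and left/right are the grayscale images from the earlier sketch.

```python
# Illustrative only: offloading block matching to the GPU.
# Requires an OpenCV build compiled with CUDA support.
import cv2

gpu_left, gpu_right = cv2.cuda_GpuMat(), cv2.cuda_GpuMat()
gpu_left.upload(left)    # copy host images into GPU memory
gpu_right.upload(right)

gpu_matcher = cv2.cuda.createStereoBM(numDisparities=128, blockSize=19)
gpu_disparity = gpu_matcher.compute(gpu_left, gpu_right).download()
```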

The system functions somewhat like the 3D technology used in sports broadcasts. That technology can show viewers a 360-degree view of a replay, but only as a still image. And it requires dozens of cameras positioned around a stadium or arena.

Tran says the Virtual Eye could end up enabling 3D broadcasting of sporting events in real time, with far fewer cameras.

And the technology figures to get better quickly. Currently, Virtual Eye can only fuse together images from two cameras. Tran's team is working on getting the software to coordinate additional cameras. He hopes to demo a version that fuses imagery from five cameras early next year.

Eventually, he foresees the technology being used to allow people to visit places they'd never see otherwise - such as the top of Mount Everest.
