Researchers in UC Santa Barbara professor Yasamin Mostofi’s lab have given the first demonstration of 3D imaging of objects through walls using ordinary wireless signals. The technique, which involves two drones working in tandem, could have a variety of applications, such as search and rescue, archaeological discovery and structural monitoring. “Our proposed approach has enabled unmanned aerial vehicles to image details through walls in 3D with only WiFi signals,” said Mostofi, a professor of electrical and computer engineering at UCSB. “This approach utilizes only WiFi RSSI measurements, does not require any prior measurements in the area of interest and does not need objects to move to be imaged.”
The proposed methodology and experimental results appeared at the Association for Computing Machinery/Institute of Electrical and Electronics Engineers International Conference on Information Processing in Sensor Networks (IPSN) in April 2017. In their experiment, two autonomous octocopters take off and fly outside an enclosed, four-sided brick house whose interior is unknown to the drones. While in flight, one copter continuously transmits a WiFi signal, the received power of which is measured by the other copter for the purpose of 3D imaging. After traversing a few proposed routes, the copters use the imaging methodology developed by the researchers to reveal the area behind the walls and generate high-resolution 3D images of the objects inside. The 3D image closely matches the actual area.
The lab had previously demonstrated 2D through-wall imaging using ground-based robots working in tandem. “However, enabling 3D through-wall imaging of real areas is considerably more challenging due to the considerable increase in the number of unknowns,” said Mostofi. The success of the 3D experiments is due to the copters’ ability to approach the area from several angles, as well as to the new methodology developed by her lab.
The researchers’ approach to enabling 3D through-wall imaging rests on four tightly integrated key components. First, they proposed robotic paths that capture the spatial variations in all three dimensions as much as possible while maintaining the efficiency of the operation. Second, they modeled the 3D unknown area of interest as a Markov random field to capture spatial dependencies, and used a graph-based belief propagation approach to update the imaging decision of each voxel (the smallest unit of a 3D image) based on the decisions of the neighboring voxels. Third, to approximate the interaction of the transmitted wave with the area of interest, they used a linear wave model.
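The neighbor-based voxel updates can be illustrated with a toy sketch. This is not the lab’s algorithm: instead of full belief propagation it runs a simple damped iteration in which each voxel’s occupancy belief is pulled toward the average of its six face neighbors, then thresholded into a binary occupied/empty decision. The function name, grid size and coupling weight are all illustrative assumptions.

```python
import numpy as np

def smooth_voxel_decisions(likelihood, n_iters=10, coupling=0.7):
    """Toy MRF-style smoothing: each voxel's occupancy belief is a blend of
    its own data likelihood and the mean of its 6 face-adjacent neighbors.
    Returns a binary occupied/empty decision per voxel."""
    belief = likelihood.astype(float).copy()
    for _ in range(n_iters):
        # Replicate-pad the borders, then average the 6 face neighbors
        # via shifted views of the padded volume.
        p = np.pad(belief, 1, mode="edge")
        neighbors = (p[:-2, 1:-1, 1:-1] + p[2:, 1:-1, 1:-1] +
                     p[1:-1, :-2, 1:-1] + p[1:-1, 2:, 1:-1] +
                     p[1:-1, 1:-1, :-2] + p[1:-1, 1:-1, 2:]) / 6.0
        belief = (1.0 - coupling) * likelihood + coupling * neighbors
    return belief > 0.5  # threshold into occupied (True) / empty (False)
```

With this kind of spatial coupling, an isolated high-likelihood voxel surrounded by empty space is suppressed as noise, while voxels inside a solid block of high likelihoods keep their occupied decision.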
Finally, they took advantage of the compressibility of the information content to image the area with a very small number of WiFi measurements (<4%). It is noteworthy that their setup consists solely of off-the-shelf units such as copters, WiFi transceivers and Tango tablets.
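This kind of sparse recovery from few linear measurements can be illustrated with a generic compressive-sensing routine. The sketch below uses orthogonal matching pursuit, a standard greedy solver, not the paper’s specific method, and the problem sizes (100 unknowns, 30 measurements, 3 nonzeros) are arbitrary stand-ins rather than the paper’s under-4-percent figure.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: greedily build a sparse x with y ~= A @ x
    by repeatedly picking the column most correlated with the residual."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        idx = int(np.argmax(np.abs(A.T @ residual)))  # most correlated column
        support.append(idx)
        # Re-fit all selected coefficients by least squares on the support.
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(A.shape[1])
        x[support] = coeffs
        residual = y - A @ x
    return x

# Recover a 3-sparse signal of 100 unknowns from only 30 random measurements.
rng = np.random.default_rng(0)
n, m, k = 100, 30, 3
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)   # random measurement matrix
y = A @ x_true
x_hat = omp(A, y, k)
```

The point of the demo is that far fewer measurements than unknowns suffice when the scene is compressible, which is the same principle the researchers exploit to image with a very small fraction of WiFi measurements.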
Source: http://www.news.ucsb.edu/2017/018068/x-ray-eyes-sky
Video: https://www.youtube.com/watch?v=THu3ZvAHI9A