Cheap Terahertz Imaging
Terahertz imaging, well known from airport security scanners, has potential applications ranging from explosives detection to collision-avoidance systems in cars. Like radar and sonar, THz imaging produces images by comparing measurements across an array of sensors.
In the latest issue of IEEE Transactions on Antennas and Propagation, researchers from MIT's Research Laboratory of Electronics describe a new technique that could reduce the number of sensors required for THz imaging by a factor of 10, or even 100, making such systems far more practical. The technique could also inform the design of new types of radar and sonar systems.
In a camera, lenses focus the light arriving from a small patch of the visual scene onto a correspondingly small patch of the sensor array. In low-frequency imaging systems, by contrast, an incoming wave, whether electromagnetic (as in radar) or acoustic (as in sonar), strikes all of the sensors in the array.
The system infers the wave's origin and intensity by comparing the wave's phase as it reaches each of the sensors. As long as the distance between sensors is no more than half the wavelength, that calculation is simply a matter of inverting the sensors' measurements. If the sensors are spaced more than half a wavelength apart, however, the inversion yields more than one possible solution, spaced at regular angles around the sensor array, a phenomenon known as "spatial aliasing."
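To make the half-wavelength condition concrete, here is a minimal sketch using the standard plane-wave model for a uniform line array (the sensor count, wavelength, and angles are illustrative assumptions, not values from the paper). It shows that once the spacing exceeds half a wavelength, two different arrival directions produce identical phase measurements:

```python
import cmath
import math

N_SENSORS = 8
WAVELENGTH = 1.0

def steering(angle_deg, spacing):
    """Phase measured at each sensor of a uniform line array for an
    incoming plane wave arriving from angle_deg off broadside."""
    phase = 2 * math.pi * spacing * math.sin(math.radians(angle_deg)) / WAVELENGTH
    return [cmath.exp(1j * phase * n) for n in range(N_SENSORS)]

def same_measurements(a, b):
    """True if two sets of sensor phases are indistinguishable."""
    return all(abs(x - y) < 1e-9 for x, y in zip(a, b))

# Half-wavelength spacing: waves from 30 and -30 degrees are distinguishable.
print(same_measurements(steering(30, 0.5), steering(-30, 0.5)))   # False

# Full-wavelength spacing: sin(30) - sin(-30) = 1 = wavelength/spacing,
# so both directions yield identical phases at every sensor: spatial aliasing.
print(same_measurements(steering(30, 1.0), steering(-30, 1.0)))   # True
```

At half-wavelength spacing the two phase patterns differ, so inverting the measurements is unambiguous; at a full wavelength, the sines of the two angles differ by exactly wavelength/spacing, and the two directions become indistinguishable.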
In most applications of low-frequency imaging, however, any given circumference around the detector is only sparsely occupied. The new technique exploits that sparsity.
Consider the space around you at a given range, say five feet: relatively little of it is occupied at exactly that distance. Different parts of the scene are occupied at different ranges, but at any single range the scene is sparse. Roughly, the theory says that if only 10 percent of the scene at a given range is occupied by objects, then only 10 percent of the full sensor array is needed to achieve full resolution.
The trick is deciding which 10 percent of the array to keep. Simply keeping every tenth sensor will not do: the regularity of the spacing between sensors leads to aliasing. Varying the distances between sensors solves that problem, but it also makes inverting the sensors' measurements, to calculate the wave's source and intensity, prohibitively complicated.
James Krieger, a former graduate student of MIT professor Gregory Wornell, and Yuval Kochman, a former postdoc who is now an assistant professor at the Hebrew University of Jerusalem, propose a detector along which the sensors are distributed in pairs. The regular spacing between the pairs ensures that the scene reconstruction can still be computed efficiently, while the distance from each sensor to its neighbor remains irregular.
The researchers also developed an algorithm that determines the optimal pattern for distributing the sensors. In essence, the algorithm maximizes the number of distinct distances between arbitrary pairs of sensors.
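A toy stand-in for that idea (not the authors' actual algorithm, and ignoring their pairwise sensor structure) is a greedy search that places each new sensor on the candidate slot that adds the most new pairwise distances:

```python
from itertools import combinations

def distinct_differences(positions):
    """Number of distinct pairwise spacings in a set of sensor positions."""
    return len({abs(a - b) for a, b in combinations(positions, 2)})

def greedy_sparse_array(n_slots, n_sensors):
    """Greedily place sensors on a grid of n_slots candidate positions,
    at each step picking the slot that yields the most distinct pairwise
    distances (a toy illustration of the optimization objective)."""
    chosen = [0]  # anchor the first sensor at slot 0
    while len(chosen) < n_sensors:
        best = max(
            (s for s in range(n_slots) if s not in chosen),
            key=lambda s: distinct_differences(chosen + [s]),
        )
        chosen.append(best)
    return sorted(chosen)

array = greedy_sparse_array(n_slots=20, n_sensors=6)
print(array, distinct_differences(array))
```

For comparison, a uniformly spaced six-sensor array has only five distinct spacings, while the greedy layout approaches the maximum of fifteen possible for six sensors.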
Krieger, together with his new colleagues at MIT Lincoln Laboratory, has performed experiments at radar frequencies using a one-dimensional array of sensors deployed in a parking lot, and the results verified the theory's predictions. Moreover, Wornell's description of the theory's sparsity assumptions, that 10 percent occupation at a given range means one-tenth the sensors, applies to one-dimensional arrays. Submarines' sonar systems instead use two-dimensional arrays, and in that case the savings compound: one-tenth the sensors in each of two dimensions translates to one-hundredth the sensors in the complete array.

James Preisig, a researcher at the Woods Hole Oceanographic Institution and principal at JP Analytics, says that he is most interested in the new technique's ability to reduce the computational burden of high-resolution sonar imaging.
“This technique helps significantly with the computational complexity of using signals from very large arrays,” Preisig says. “I can imagine it being deployed in situations where you are using very, very large arrays to get good spatial resolution, but you’re processing signals on a vehicle or something where you did not have significant computational power.”
In those contexts, Preisig says, the new technique’s sparsity assumptions make perfect sense. “In this context, the field of view is divided into sectors,” Preisig says. “The field has to be sector-sparse — only a subset of sectors have objects in them. That is realistic.”