

UH Engineer Changes How We Interpret Geospatial Images
By Laurie Fickman

Saurabh Prasad, Cullen College assistant professor of electrical and computer engineering, is reporting a breakthrough that could overcome hurdles to the accurate interpretation of imagery data. His work, tackling the challenge of object rotation, is featured on the cover of the journal IEEE Transactions on Geoscience and Remote Sensing, which showcases his article “Morphologically decoupled structured sparsity for rotation-invariant hyperspectral image analysis.”

Hyperspectral imagery, which captures exceptionally fine detail across hundreds of wavelengths (colors) from hyperspectral cameras, presents interpretational challenges that Prasad is working to overcome.

“You can’t simply use off-the-shelf techniques to analyze such images effectively,” said Prasad. A big part of his work is designing new algorithms to leverage the potential in such data.

The challenge begins when the algorithms that interpret the data are created. To build a program that recognizes objects in images, the program must be fed hundreds, if not thousands, of examples of the object so it can learn recognizable features. But the program is stymied if the object under review is oriented differently from the examples in its training library. The “nuisance factors,” as Prasad calls them, include varying illumination, sensor viewpoints, scales and orientations of objects in the images. Prasad’s Hyperspectral Image Analysis Laboratory focuses on machine learning and image analysis algorithms that are robust to these confounding conditions.
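A minimal sketch of that failure mode, not taken from the paper: a classifier trained only on upright images from scikit-learn’s handwritten-digits dataset loses much of its accuracy when the same test images are rotated. The dataset, model and rotation angle are illustrative choices, not Prasad’s setup.

```python
# Sketch: a classifier trained on upright images degrades on rotated ones.
import numpy as np
from scipy.ndimage import rotate
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 8x8 grayscale digit images
X_train, X_test, y_train, y_test = train_test_split(
    digits.images, digits.target, random_state=0)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train.reshape(len(X_train), -1), y_train)

# Accuracy on upright test images vs. the same images rotated 45 degrees.
upright = clf.score(X_test.reshape(len(X_test), -1), y_test)
rotated_imgs = np.stack([rotate(im, 45, reshape=False) for im in X_test])
rotated = clf.score(rotated_imgs.reshape(len(X_test), -1), y_test)
print(f"upright: {upright:.2f}, rotated: {rotated:.2f}")  # rotated is far lower
```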

“In this paper we developed a method to specifically account for orientation variability. With this work, we make new inroads into the field of sparse representation-based image analysis, where optimal image analysis can be undertaken by exploiting the underlying sparsity in signal representations,” said Prasad. He can train the machine using any orientation of an object and apply the sparse representation-based model to any other orientation. The method includes partitioning an image into its geometric components, which enables Prasad to design algorithms that ensure robustness to orientation changes.
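To give a flavor of the sparse representation family Prasad builds on (not his morphologically decoupled model), the sketch below codes a test spectrum as a sparse combination of training spectra, then assigns the class whose atoms reconstruct it with the smallest residual. All sizes, the synthetic data and the five-coefficient sparsity level are hypothetical.

```python
# Sketch: generic sparse representation-based classification (SRC).
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
n_bands, n_per_class = 50, 20                    # hypothetical sizes
means = [rng.normal(size=n_bands) for _ in range(3)]
# Dictionary: columns are training spectra, grouped by class, unit-normalized.
D = np.hstack([m[:, None] + 0.1 * rng.normal(size=(n_bands, n_per_class))
               for m in means])
D /= np.linalg.norm(D, axis=0)
labels = np.repeat([0, 1, 2], n_per_class)

x = means[1] + 0.1 * rng.normal(size=n_bands)    # a test spectrum from class 1
alpha = orthogonal_mp(D, x, n_nonzero_coefs=5)   # sparse code over the dictionary

# Classify by smallest class-wise reconstruction residual.
residuals = [np.linalg.norm(x - D[:, labels == c] @ alpha[labels == c])
             for c in range(3)]
print("predicted class:", int(np.argmin(residuals)))  # -> 1
```

In the paper, robustness to rotation comes from how the dictionary is structured after the morphological decoupling step; this sketch shows only the plain classification stage.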

Intense images need intense interpretation

Reviewing satellite (or aerial) images from hyperspectral cameras, scientists can tell whether a soccer field is covered with natural grass or Astroturf. The images are that precise and detailed.

“In a sense, every pixel has a chemical fingerprint,” said Prasad. Examples of this intense ability to peer into chemicals on the ground are the NASA images collected over ground zero after 9/11, in which the extent of the debris field was interpreted by remote sensing.

“That is the power of hyperspectral imaging,” said Prasad. “Because of such images, they had an idea of how far the concrete and different kinds of dust, like gypsum and wallboard, had spread, something that would be very challenging with traditional color images.”

Color camera images provide information on three colors: red, green and blue. Hyperspectral cameras provide information on hundreds of colors and are not constrained by the visible part of the spectrum; they peer beyond the visible into the infrared portion of the spectrum.
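The difference in data volume is easy to see in the array shapes. A toy illustration with hypothetical dimensions (the 224 bands echo sensors such as AVIRIS, but any count works):

```python
# Toy data layouts: an RGB image stores 3 values per pixel;
# a hyperspectral cube stores one measurement per spectral band.
import numpy as np

height, width = 512, 512
rgb_image = np.zeros((height, width, 3))    # red, green, blue
hsi_cube = np.zeros((height, width, 224))   # e.g. 224 contiguous bands,
                                            # visible through infrared

# The per-pixel "chemical fingerprint" Prasad describes is the
# full spectrum measured at one ground location:
spectrum = hsi_cube[100, 200, :]
print(rgb_image.shape, hsi_cube.shape, spectrum.shape)
```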

Such data is so complex that it is beyond human interpretation, especially with images spanning a wide geospatial scale.

“At the end of the day we want machines (algorithms) to assist us in understanding such images,” said Prasad. “Humans are limited in capacity to interpret such large and complex data. It requires an algorithm-based approach.”

And so he wrote one.

The full article is available in IEEE Transactions on Geoscience and Remote Sensing.

