WSRI project focuses on image retrieval

Photo of Phani Kidambi, Ph.D., Wright State Research Institute engineer.

(Contributed by Timothy R. Gaffney)

A picture might be worth a thousand words, but a million pictures are worthless if you can’t find the one you need.

The growing flood of online digital images has outstripped the abilities of Internet search engines to find specific pictures, says Phani Kidambi, Ph.D., a Wright State Research Institute engineer.

A similar problem confronts intelligence analysts overwhelmed with imagery from ever more surveillance sensors, says Kidambi, who studied the problem for his Wright State University doctoral dissertation under the guidance of S. Narayanan, dean of the College of Engineering and Computer Science and executive director of the Research Institute.

Search engines depend on two basic schemes to hunt for pictures, Kidambi says. The most popular Internet search engines use annotation-based image retrieval, which means they actually search keywords associated with the images. The other method is content-based image retrieval, which searches the image itself for features such as color, shape or texture.

Each method has shortcomings. A keyword search may not find the image you want if the keywords on the image don’t describe it well – and many images on the Internet have no keywords at all. “They are lost” to annotation-based search engines, Kidambi says.
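In code, annotation-based retrieval reduces to a keyword lookup. The sketch below is purely illustrative (the image names and tags are invented, not from the project); note that the untagged image can never be returned, no matter the query:

    # Annotation-based retrieval never looks at pixels, only at the keywords
    # stored alongside each image. All names and tags here are invented.
    annotations = {
        "beach.jpg":  {"sea", "sand", "vacation"},
        "match.jpg":  {"cricket", "batsman", "stadium"},
        "sunset.jpg": set(),  # no keywords: invisible to this kind of search
    }

    def annotation_search(query_terms):
        """Return images whose stored keywords overlap the query terms."""
        terms = set(query_terms)
        return [name for name, tags in annotations.items() if terms & tags]

    print(annotation_search(["cricket"]))  # ['match.jpg']
    print(annotation_search(["sunset"]))   # [] -- the sunset photo is "lost"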

Content-based image retrieval is even more problematic because computers aren’t yet good at recognizing objects in an image – telling the difference between a blue sea and the sky, for example.
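A toy example shows why. If a content-based engine reduces images to coarse color histograms (one common kind of feature, assumed here for illustration rather than taken from any specific engine), a blue sea and a blue sky come out looking almost identical:

    def histogram_distance(h1, h2):
        """Sum of absolute differences between two color histograms."""
        return sum(abs(a - b) for a, b in zip(h1, h2))

    # Fabricated (red, green, blue) pixel fractions for three photos.
    sea   = (0.05, 0.15, 0.80)
    sky   = (0.06, 0.14, 0.80)
    grass = (0.05, 0.85, 0.10)

    print(histogram_distance(sea, sky))    # ~0.02 -- sea and sky look the same
    print(histogram_distance(sea, grass))  # ~1.40 -- grass is clearly different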

Kidambi and Narayanan developed a hybrid system. It ultimately searches by keywords. “The search is very similar to the current Google image search,” Kidambi says.

But it prepares the database first by organizing existing keywords and generating new ones based on the image content, to create comprehensive annotations that are added to the images. Kidambi says the result is a better list of keywords for each image – and some keywords for images that had none before.
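Kidambi's published description doesn't include code, but the hybrid idea can be sketched roughly as follows, with content_keywords standing in for the content-based step; the function names and sample data are assumptions for illustration:

    def content_keywords(image):
        """Stand-in for content analysis mapping visual features to terms."""
        # Fabricated output; the real system derives these from the image itself.
        guesses = {"match.jpg": {"cricket", "pitch"}, "sunset.jpg": {"sky", "sunset"}}
        return guesses.get(image, set())

    def build_annotations(existing_tags):
        """Merge each image's existing keywords with content-derived ones."""
        enriched = {}
        for image, tags in existing_tags.items():
            enriched[image] = set(tags) | content_keywords(image)
        return enriched

    existing = {"match.jpg": {"batsman"}, "sunset.jpg": set()}
    print(build_annotations(existing))
    # match.jpg gains 'cricket' and 'pitch'; sunset.jpg, once unsearchable,
    # now has keywords of its own.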

In tests, independent evaluators got markedly better retrieval results with this method, Kidambi says.

Kidambi believes the new approach could have important applications – helping image analysts search growing streams of surveillance imagery, for instance, or helping medical doctors spot subtle, early signs of disease in medical images.

Kidambi’s research project was modest in scope. He created a test database of 10,000 images on one general subject – cricket, a popular sport in his native India.

The project also used a content-based search algorithm Kidambi and Narayanan developed that searches for visual cues in an image by comparing its features with the algorithm’s own database of known objects. In this case, the database was limited to cricket, but he says the modular nature of his system would allow users to plug in different algorithms for different databases, or add algorithms and expand the database.
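That plug-in design might look something like the sketch below. The ContentMatcher interface and CricketMatcher class are assumptions made for illustration, not names from the dissertation:

    from typing import Protocol

    class ContentMatcher(Protocol):
        def suggest_keywords(self, image_path: str) -> set: ...

    class CricketMatcher:
        """Matches image features against a database of known cricket objects."""
        def __init__(self, object_db):
            self.object_db = object_db  # e.g. feature templates for bats, stumps

        def suggest_keywords(self, image_path):
            # Real logic would extract features from the image and return the
            # names of the database objects they match; this is a placeholder.
            return {"cricket"}

    def annotate(image_path, matcher):
        return matcher.suggest_keywords(image_path)

    # Swapping domains means swapping the matcher, not rewriting the pipeline.
    print(annotate("match.jpg", CricketMatcher(object_db={})))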

A key feature of the research project was human input. Kidambi used a group of subject matter experts – fellow cricket fans – to review, correct and add to the automatically generated keyword templates.

The research project tested the ability of humans to come up with good keywords both with and without the automatically generated templates of Kidambi’s system. The results showed the templates helped humans do a better job of annotating images for later retrieval.

But Kidambi says future research could further automate the process. “The human role will slowly be replaced by machine learning algorithms, where the human trains the machine learning algorithms on how he annotates a particular image,” he says.
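One way that hand-off could work (an assumption, not a description of the project's implementation) is to treat each human-annotated image as a training example, with a classifier per keyword learning when the human applies it:

    from sklearn.linear_model import LogisticRegression

    # Fabricated image feature vectors (say, color/texture summaries) and the
    # human annotator's yes/no decisions for the keyword "cricket".
    features = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
    human_labels = [1, 1, 0, 0]

    model = LogisticRegression().fit(features, human_labels)
    print(model.predict([[0.85, 0.15]]))  # the model now suggests "cricket" itself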

In the meantime, he says, he’s seeking a patent for the system.

The Wright State Research Institute is a university organization that taps Wright State’s capabilities, researchers, scientists and staff to solve difficult problems facing its industry and government partners.
