{"id":4934,"date":"2011-05-18T09:13:47","date_gmt":"2011-05-18T13:13:47","guid":{"rendered":"http:\/\/webapp2.wright.edu\/web1\/newsroom\/?p=4934"},"modified":"2011-05-19T08:52:42","modified_gmt":"2011-05-19T12:52:42","slug":"wsri-project-focuses-on-image-retrieval","status":"publish","type":"post","link":"https:\/\/webapp2.wright.edu\/web1\/newsroom\/2011\/05\/18\/wsri-project-focuses-on-image-retrieval\/","title":{"rendered":"WSRI project focuses on image retrieval"},"content":{"rendered":"<div id=\"attachment_4936\" style=\"width: 236px\" class=\"wp-caption alignright\"><a rel=\"attachment wp-att-4936\" href=\"http:\/\/webapp2.wright.edu\/web1\/newsroom\/2011\/05\/18\/wsri-project-focuses-on-image-retrieval\/ck-41936-diane-hamilton-graduate-deans-student-advisory-board-10-15-09\/\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-4936\" class=\"size-medium wp-image-4936\" src=\"http:\/\/webapp2.wright.edu\/web1\/newsroom\/files\/2011\/05\/Phani-Kidambi-portrait-226x300.jpg\" alt=\"\" width=\"226\" height=\"300\" \/><\/a><p id=\"caption-attachment-4936\" class=\"wp-caption-text\">Photo of Phani Kidambi, Ph.D., Wright State Research Institute engineer.<\/p><\/div>\n<p>((Contributed by Timothy R. Gaffney))<\/p>\n<p>A picture might be worth a thousand words, but a million pictures are worthless if you can\u2019t find the one you need.<\/p>\n<p>The growing flood of online digital images has outstripped the abilities of Internet search engines to find specific pictures, says Phani Kidambi, Ph.D., a Wright State Research Institute engineer.<\/p>\n<p>A similar problem confronts intelligence analysts overwhelmed with imagery from ever more surveillance sensors, says Kidambi, who studied the problem for his Wright State University doctoral dissertation under the guidance of S. 
Narayanan, dean of the College of Engineering and Computer Science and executive director of the Research Institute.<\/p>\n<p>Search engines depend on two basic schemes to hunt for pictures, Kidambi says. The most popular Internet search engines use annotation-based image retrieval, which means they actually search keywords associated with the images. The other method is content-based image retrieval, which searches the image itself for features such as color, shape or texture.<\/p>\n<p>Each method has shortcomings. A keyword search may not find the image you want if the keywords on the image don\u2019t describe it well \u2013 and many images on the Internet have no keywords at all. \u201cThey are lost\u201d to annotation-based search engines, Kidambi says.<\/p>\n<p>Content-based image retrieval is even more problematic because computers aren\u2019t yet good at recognizing objects in an image \u2013 telling the difference between a blue sea and the sky, for example.<\/p>\n<p>Kidambi and Narayanan developed a hybrid system. It ultimately searches by keywords. \u201cThe search is very similar to the current Google image search,\u201d Kidambi says.<\/p>\n<p>But it prepares the database first by organizing existing keywords and generating new ones based on the image content, to create comprehensive annotations that are added to the images. Kidambi says the result is a better list of keywords for each image \u2013 and some keywords for images that had none before.<\/p>\n<p>In tests, independent evaluators got markedly better results with this method, Kidambi says.<\/p>\n<p>Kidambi believes the new approach could have important applications \u2013 helping image analysts search growing streams of surveillance imagery, for instance, or helping medical doctors spot subtle, early signs of disease in medical images.<\/p>\n<p>Kidambi\u2019s research project was modest in scope. 
He created a test database of 10,000 images on one general subject \u2013 cricket, a popular sport in his native India.<\/p>\n<p>The project also used a content-based search algorithm Kidambi and Narayanan developed that searches for visual cues in an image by comparing its features with the algorithm\u2019s own database of known objects. In this case, the database was limited to cricket, but he says the modular nature of his system would allow users to plug in different algorithms for different databases, or add algorithms and expand the database.<\/p>\n<p>A key feature of the research project was human input. Kidambi used a group of subject matter experts \u2013 fellow cricket fans \u2013 to review, correct and add to the automatically generated keyword templates.<\/p>\n<p>The research project tested the ability of humans to come up with good keywords both with and without the automatically generated templates of Kidambi\u2019s system. The results showed the templates helped humans do a better job of annotating images for later retrieval.<\/p>\n<p>But Kidambi said future research could further automate the process. \u201cThe human role will slowly be replaced by machine learning algorithms, where the human trains the machine learning algorithms on how he annotates a particular image,\u201d he says.<\/p>\n<p>In the meantime, he says, he\u2019s seeking a patent for the system.<\/p>\n<p>The Wright State Research Institute is a university organization that taps Wright State\u2019s capabilities, researchers, scientists and staff to solve difficult problems facing its industry and government partners.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>A picture might be worth a thousand words, but a million pictures are worthless if you can\u2019t find the one you need. 
<a href=\"https:\/\/webapp2.wright.edu\/web1\/newsroom\/2011\/05\/18\/wsri-project-focuses-on-image-retrieval\/\" class=\"morelink\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":23,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[725,715,18,719],"tags":[],"class_list":["post-4934","post","type-post","status-publish","format-standard","hentry","category-home-news-sidebar","category-news","category-research","category-special-categories"],"_links":{"self":[{"href":"https:\/\/webapp2.wright.edu\/web1\/newsroom\/wp-json\/wp\/v2\/posts\/4934","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/webapp2.wright.edu\/web1\/newsroom\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/webapp2.wright.edu\/web1\/newsroom\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/webapp2.wright.edu\/web1\/newsroom\/wp-json\/wp\/v2\/users\/23"}],"replies":[{"embeddable":true,"href":"https:\/\/webapp2.wright.edu\/web1\/newsroom\/wp-json\/wp\/v2\/comments?post=4934"}],"version-history":[{"count":16,"href":"https:\/\/webapp2.wright.edu\/web1\/newsroom\/wp-json\/wp\/v2\/posts\/4934\/revisions"}],"predecessor-version":[{"id":4947,"href":"https:\/\/webapp2.wright.edu\/web1\/newsroom\/wp-json\/wp\/v2\/posts\/4934\/revisions\/4947"}],"wp:attachment":[{"href":"https:\/\/webapp2.wright.edu\/web1\/newsroom\/wp-json\/wp\/v2\/media?parent=4934"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/webapp2.wright.edu\/web1\/newsroom\/wp-json\/wp\/v2\/categories?post=4934"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/webapp2.wright.edu\/web1\/newsroom\/wp-json\/wp\/v2\/tags?post=4934"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}