Detect2Interact: Localizing Object Key Field in Visual Question Answering (VQA) with LLMs

Research output: Contribution to journal › Article › peer-review


Localization plays a crucial role in enhancing the practicality and precision of VQA systems. By enabling fine-grained identification of, and interaction with, specific parts of an object, it significantly improves a system’s ability to provide contextually relevant and spatially accurate responses, which is crucial for applications in dynamic environments such as robotics and augmented reality. However, traditional systems struggle to accurately map objects within images and thus to generate nuanced, spatially aware responses. In this work, we introduce “Detect2Interact”, which addresses these challenges through an advanced approach to fine-grained detection of an object’s visual key field. First, we use the segment anything model (SAM) to generate detailed spatial maps of objects in images. Second, we use Vision Studio to extract semantic object descriptions. Third, we employ GPT-4’s common-sense knowledge to bridge the gap between an object’s semantics and its spatial map. As a result, Detect2Interact achieves consistent qualitative results on object key field detection across extensive test cases and outperforms existing VQA systems with object detection by providing a more reasonable and finer visual representation.
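The three-step pipeline described above can be sketched in outline. This is a minimal illustrative sketch only: all three components are stubbed with placeholder functions, since the real pipeline would call the segment-anything model, Vision Studio, and the GPT-4 API. Every function name, field, and value below is an assumption made for illustration, not the paper's implementation.

```python
# Illustrative sketch of a Detect2Interact-style pipeline.
# All three stages are stubs; a real system would call SAM for
# segmentation, Vision Studio for captions, and GPT-4 for reasoning.

def segment_objects(image):
    """Stub for SAM: return per-object masks, here just bounding boxes."""
    return [{"id": 0, "bbox": (40, 30, 120, 200)}]  # (x, y, w, h)

def describe_objects(image):
    """Stub for Vision Studio: return semantic descriptions of objects."""
    return [{"id": 0, "label": "mug", "part_of_interest": "handle"}]

def locate_key_field(masks, descriptions):
    """Stub for the GPT-4 step: map a semantic part to a spatial region.

    As a placeholder for common-sense reasoning, this simply assumes
    the part of interest lies in the right half of the bounding box.
    """
    x, y, w, h = masks[0]["bbox"]
    return {
        "label": descriptions[0]["part_of_interest"],
        "region": (x + w // 2, y, w // 2, h),  # right half of the bbox
    }

key_field = locate_key_field(segment_objects(None), describe_objects(None))
print(key_field["label"], key_field["region"])
```

The key design point the abstract highlights is the bridge in the third step: the spatial output of segmentation and the semantic output of captioning are combined so the answer can point at a sub-region of an object rather than the whole object.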
Original language: English
Pages (from-to): 1-11
Number of pages: 11
Journal: IEEE Intelligent Systems
Early online date: 3 Apr 2024
Publication status: E-pub ahead of print - 3 Apr 2024