Abstract
Localization plays a crucial role in enhancing the practicality and precision of visual question answering (VQA) systems. By enabling fine-grained identification of and interaction with specific parts of an object, it significantly improves a system’s ability to provide contextually relevant and spatially accurate responses, which is essential for applications in dynamic environments such as robotics and augmented reality. However, traditional systems struggle to map objects within images accurately enough to generate nuanced, spatially aware responses. In this work, we introduce “Detect2Interact”, which addresses these challenges with a fine-grained approach to detecting an object’s visual key fields. First, we use the Segment Anything Model (SAM) to generate detailed spatial maps of the objects in an image. Second, we use Vision Studio to extract semantic object descriptions. Third, we employ GPT-4’s common-sense knowledge to bridge the gap between an object’s semantics and its spatial map. As a result, Detect2Interact achieves consistent qualitative results on object key field detection across extensive test cases and outperforms existing VQA systems based on object detection by providing more reasonable and finer-grained visual representations.
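To make the three-step pipeline concrete, below is a minimal Python sketch under stated assumptions. The SAM and GPT-4 calls use real public APIs (the `segment_anything` and `openai` packages), but the helper names (`spatial_maps`, `describe_objects`, `ask_gpt4`), the prompt wording, and the mask-to-field matching are illustrative stand-ins, not the authors’ implementation; the Vision Studio step is left as a stub since its exact interface is not described here.

```python
# Hedged sketch of a Detect2Interact-style pipeline (not the paper's code).
import numpy as np
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator
from openai import OpenAI


def spatial_maps(image: np.ndarray, checkpoint: str = "sam_vit_h.pth"):
    """Step 1: run SAM to produce candidate masks (spatial maps) for the image."""
    sam = sam_model_registry["vit_h"](checkpoint=checkpoint)
    # Returns a list of dicts with keys such as 'segmentation', 'bbox', 'area'.
    return SamAutomaticMaskGenerator(sam).generate(image)


def describe_objects(image: np.ndarray) -> str:
    """Step 2 (stub): semantic object descriptions. The paper uses Vision
    Studio; any image captioning/tagging service could stand in here."""
    raise NotImplementedError("call an image-description service")


def ask_gpt4(question: str, description: str, masks) -> int:
    """Step 3: GPT-4 bridges semantics and spatial maps by selecting the
    mask index that best matches the queried object key field."""
    client = OpenAI()
    summary = "\n".join(
        f"mask {i}: bbox={m['bbox']}, area={m['area']}"
        for i, m in enumerate(masks)
    )
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": (
                f"Scene description: {description}\n"
                f"Candidate masks:\n{summary}\n"
                f"Question: {question}\n"
                "Reply with only the index of the mask covering the "
                "queried key field."
            ),
        }],
    )
    return int(reply.choices[0].message.content.strip())
```

The sketch reflects the division of labor stated in the abstract: SAM supplies geometry, a description service supplies semantics, and GPT-4’s common-sense reasoning links the two by grounding the question in a specific mask.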
| Original language | English |
| --- | --- |
| Pages (from-to) | 35-44 |
| Number of pages | 10 |
| Journal | IEEE Intelligent Systems |
| Volume | 39 |
| Issue number | 3 |
| Early online date | 3 Apr 2024 |
| DOIs | |
| Publication status | Published - 1 May 2024 |
Keywords
- Visualization
- Semantics
- Object detection
- Image segmentation
- Task analysis
- Computational modeling
- Chatbots