Despite the recent popularity of Zero-Shot Learning (ZSL) techniques, existing approaches rely on ontological engineering with heavy annotation effort to supervise a transferable attribute model that generalises across seen and unseen classes. Moreover, existing crowd-sourced, expert-based, or data-driven attribute annotations (e.g. word embeddings) cannot guarantee a sufficient description of the visual features, which leads to significant performance degradation. To circumvent expensive attribute annotations while retaining reliability, we propose a Fuzzy Interpolative Reasoning (FIR) algorithm that discovers inter-class associations from lightweight simile annotations based on visual similarities between classes. The inferred representation better bridges the visual-semantic gap and achieves state-of-the-art experimental results.
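The core idea can be illustrated with a minimal sketch: infer an unseen class's attribute vector by interpolating the attribute vectors of visually similar seen classes, weighted by simile-style similarity scores. This is a generic fuzzy-interpolation scheme under assumed toy data (the class names, attribute values, and the function `interpolate_attributes` are all illustrative), not the paper's exact FIR algorithm.

```python
import numpy as np

# Hypothetical seen-class attribute vectors (one row per seen class) and a
# simile-style similarity vector giving the unseen class's resemblance to
# each seen class. All names and values are illustrative, not from the paper.
seen_attributes = np.array([
    [0.9, 0.1, 0.4],   # e.g. "zebra"
    [0.2, 0.8, 0.5],   # e.g. "dolphin"
    [0.6, 0.3, 0.9],   # e.g. "tiger"
])
similarities = np.array([0.7, 0.1, 0.5])  # unseen class vs. each seen class

def interpolate_attributes(seen, sims):
    """Infer an unseen class's attribute vector as a similarity-weighted
    interpolation of the seen classes' attributes (a generic fuzzy
    interpolation sketch, not the paper's exact method)."""
    weights = sims / sims.sum()   # normalise similarities into convex weights
    return weights @ seen         # convex combination of attribute rows

unseen_attr = interpolate_attributes(seen_attributes, similarities)
```

Because the weights are a convex combination, the inferred vector always stays inside the per-attribute range spanned by the seen classes, which is the basic guarantee one would want from an interpolative (rather than extrapolative) inference.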
| Number of pages | 12 |
| Publication status | Published - 3 Sep 2018 |
| Event | BMVC 2018 - British Machine Vision Conference |
| Duration | 3 Sep 2018 → 6 Sep 2018 |