TY - JOUR
T1 - CamNav: a computer-vision indoor navigation system
T2 - Journal of Supercomputing
AU - Karkar, Abdel Ghani
AU - Al-Maadeed, Somaya
AU - Kunhoth, Jayakanth
AU - Bouridane, Ahmed
N1 - Funding Information:
This publication was supported by Qatar University Collaborative High Impact Grant QUHI-CENG-18/19-1. The findings achieved herein are solely the responsibility of the authors and do not necessarily represent the official views of Qatar University.
PY - 2021/7/1
Y1 - 2021/7/1
AB - We present CamNav, a vision-based navigation system that provides users with indoor navigation services. CamNav captures images in real time while the user is walking in order to recognize their current location, and it does not require the installation of any indoor localization devices. In this paper, we describe the techniques that improve the recognition accuracy of an existing system that uses oriented FAST and rotated BRIEF (ORB) features as part of its location-matching procedure. We employ multiscale local binary pattern (MSLBP) features to recognize places. We implement CamNav and conduct the experiments required to compare the accuracy obtained when using ORB features, scale-invariant feature transform (SIFT) features, MSLBP features, and the combination of ORB or SIFT features with MSLBP. A dataset of 42 classes was constructed for assessment; each class contains 100 images for training one location and 24 images for testing. The evaluation results demonstrate that place recognition accuracy with MSLBP features is higher than with SIFT features: the accuracy when using SIFT, MSLBP, and ORB features is 88.19%, 91.27%, and 96.33%, respectively. The overall place recognition accuracy increased to 93.55% and 97.52% after integrating MSLBP with SIFT and with ORB, respectively.
KW - Deep learning
KW - Indoor navigation
KW - Mobile technology
KW - Multiscale local binary pattern features
KW - ORB features
KW - Scene recognition
KW - SIFT features
KW - SVM
UR - http://www.scopus.com/inward/record.url?scp=85099177505&partnerID=8YFLogxK
U2 - 10.1007/s11227-020-03568-5
DO - 10.1007/s11227-020-03568-5
M3 - Article
AN - SCOPUS:85099177505
VL - 77
SP - 7737
EP - 7756
JO - Journal of Supercomputing
JF - Journal of Supercomputing
SN - 0920-8542
IS - 7
ER -