The triplet loss function has seen extensive use in person re-identification. Most prior work focuses on either improving the mining algorithm or adding new terms to the loss function itself. Our work instead concentrates on two other core components of the triplet loss that have been under-researched. First, we replace the standard Euclidean distance with a dynamically weighted variant, whose weights are selected based on the standard deviation of each feature across the batch. Second, we exploit channel attention via a squeeze-and-excitation unit in the backbone model to emphasise important features throughout all layers of the model. This ensures that the output feature vector better represents the image and is also more suitable for use within our dynamically weighted Euclidean distance function. We demonstrate that our alterations provide significant performance improvements across popular re-identification data sets, including an almost 10% mAP improvement on the CUHK03 data set. The proposed model attains results competitive with many state-of-the-art person re-identification models.
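The dynamically weighted distance described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the exact weighting scheme is not specified in the abstract, so the choice here (per-dimension batch standard deviation, normalised so the weights sum to the feature dimension) and the function names `batch_std_weights` and `weighted_euclidean` are assumptions for illustration only.

```python
import numpy as np

def batch_std_weights(features):
    # Hypothetical weighting: the standard deviation of each feature
    # dimension across the batch, normalised so the weights sum to the
    # feature dimension (uniform weights of 1 recover the standard
    # Euclidean distance). features has shape (batch_size, dim).
    std = features.std(axis=0)
    return std / std.sum() * features.shape[1]

def weighted_euclidean(a, b, w):
    # Euclidean distance between feature vectors a and b, with each
    # squared difference scaled by its per-dimension weight w.
    return np.sqrt(np.sum(w * (a - b) ** 2))
```

With uniform weights the function reduces to the ordinary Euclidean distance, so the weighting can be seen as stretching the embedding space along dimensions that vary more across the batch.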
|Number of pages||9|
|Journal||Journal of WSCG|
|Publication status||Published - 7 Oct 2019|
|Event||27th International Conference on Computer Graphics, Visualization and Computer Vision 2019 - Primavera Hotel and Congress Center, Plzen, Czech Republic|
|Duration||27 May 2019 → 31 May 2019|