TY - CONF
T1 - Where is my Wallet? Modeling Object Proposal Sets for Egocentric Visual Query Localization
AU - Xu, Mengmeng
AU - Li, Yanghao
AU - Fu, Cheng-Yang
AU - Ghanem, Bernard
AU - Xiang, Tao
AU - Pérez-Rúa, Juan-Manuel
N1 - Acknowledgements: MX and BG are supported by the KAUST Office of Sponsored Research (OSR) through the VCC funding and the SDAIA-KAUST Center of Excellence in Data Science and Artificial Intelligence (SDAIA-KAUST AI).
PY - 2023
Y1 - 2023/08/22
AB - This paper deals with the problem of localizing objects in image and video datasets from visual exemplars. In particular, we focus on the challenging problem of egocentric visual query localization. We first identify grave implicit biases in current query-conditioned model design and visual query datasets. Then, we directly tackle such biases at both frame and object set levels. Concretely, our method solves these issues by expanding limited annotations and dynamically dropping object proposals during training. Additionally, we propose a novel transformer-based module that allows for object-proposal set context to be considered while incorporating query information. We name our module Conditioned Contextual Transformer or CocoFormer. Our experiments show the proposed adaptations improve egocentric query detection, leading to a better visual query localization system in both 2D and 3D configurations. Thus, we can improve frame-level detection performance from 26.28% to 31.26% in AP, which correspondingly improves the VQ2D and VQ3D localization scores by significant margins. Our improved context-aware query object detector ranked first and second respectively in the VQ2D and VQ3D tasks in the 2nd Ego4D challenge. In addition to this, we showcase the relevance of our proposed model in the Few-Shot Detection (FSD) task, where we also achieve SOTA results.
UR - http://hdl.handle.net/10754/693775
UR - https://ieeexplore.ieee.org/document/10203505/
DO - 10.1109/cvpr52729.2023.00255
M3 - Conference contribution
BT - 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
PB - IEEE
ER -