Wenxuan Guo1*
Xiuwei Xu1*
Hang Yin1
Ziwei Wang2
Jianjiang Feng1
Jie Zhou1
Jiwen Lu1
1Tsinghua University 2Nanyang Technological University
Paper (arXiv)
Code (GitHub)
If the video does not load, click HERE to download it.
Visual navigation with an image as the goal is a fundamental yet challenging problem. Conventional methods either rely on end-to-end RL or on modular policies that use a topological graph or BEV map as memory, neither of which fully models the geometric relationship between the explored 3D environment and the goal image. To localize the goal image in 3D space efficiently and accurately, we build our navigation system upon the renderable 3D Gaussian Splatting (3DGS) representation. However, due to the computational cost of 3DGS optimization and the large search space of 6-DoF camera poses, directly leveraging 3DGS for image localization during the agent's exploration is prohibitively inefficient. To this end, we propose IGL-Nav, an Incremental 3D Gaussian Localization framework for efficient and 3D-aware image-goal navigation. Specifically, we incrementally update the scene representation as new images arrive via feed-forward monocular prediction. We then coarsely localize the goal by leveraging geometric information for discrete-space matching, which is equivalent to an efficient 3D convolution. Finally, when the agent is close to the goal, we solve for the fine target pose by optimization via differentiable rendering. IGL-Nav outperforms existing state-of-the-art methods by a large margin across diverse experimental configurations. It also handles the more challenging free-view image-goal setting and can be deployed on a real-world robotic platform, with a cellphone capturing the goal image at an arbitrary pose.
IGL-Nav effectively guides the agent to reach free-view image goal via incremental 3D Gaussian localization.
Overall pipeline of IGL-Nav. We maintain an incremental 3D Gaussian Splatting (3DGS) representation with feed-forward prediction and perform coarse-to-fine goal localization using both 3D convolution and differentiable rendering.
Illustration of IGL-Nav. (a) We maintain an incremental 3DGS scene representation with feed-forward prediction. (b) Coarse target localization is modeled as a 5-dimensional matching problem, implemented efficiently by using the target embedding as a 3D convolutional kernel. (c) Fine target localization via differentiable 3DGS rendering and matching-constrained optimization.
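The convolution-as-matching idea in (b) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the grid size, feature dimension, kernel size, and the flattening of yaw/pitch into a single set of orientation bins are all assumptions made here for clarity.

```python
import torch
import torch.nn.functional as F

C, D, H, W = 8, 16, 16, 16   # feature channels and voxel grid size (assumed)
n_rot = 12                   # discrete orientation (yaw/pitch) bins (assumed)

scene_feat = torch.randn(1, C, D, H, W)        # voxelized scene embedding (hypothetical)
goal_kernels = torch.randn(n_rot, C, 3, 3, 3)  # goal-image embedding, one kernel per orientation bin

# Cross-correlating the goal kernels with the scene grid scores every
# (orientation, z, y, x) hypothesis in a single conv3d call, instead of
# matching each candidate pose separately.
score = F.conv3d(scene_feat, goal_kernels, padding=1)  # shape (1, n_rot, D, H, W)

# The argmax over the score volume gives the coarse goal pose:
# an orientation bin r and a voxel cell (z, y, x).
flat = score.view(-1).argmax().item()
r, rem = divmod(flat, D * H * W)
z, rem = divmod(rem, H * W)
y, x = divmod(rem, W)
```

The payoff of this formulation is that the exhaustive discrete search over positions and orientations runs as one batched convolution on the GPU.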
Modeling of the camera pose space. (a) Line LR is almost always parallel to the ground. (b) Line AO' is parallel to Plane XOY. Plane AO'B is perpendicular to Plane XOY.
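Because the camera's horizontal axis stays parallel to the ground plane, the roll degree of freedom drops out and a candidate pose reduces to position plus yaw and pitch, i.e. the 5 dimensions of the coarse matching. A minimal sketch of this parameterization (function name and axis conventions are assumptions, with world Z as the up axis):

```python
import numpy as np

def pose_from_5dof(x, y, z, yaw, pitch):
    """Camera-to-world matrix for a roll-free camera (assumed parameterization).

    Zero roll mirrors the ground-parallel constraint in the figure: the
    camera's horizontal axis remains parallel to the ground plane XOY.
    """
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    R_yaw = np.array([[cy, -sy, 0.0],
                      [sy,  cy, 0.0],
                      [0.0, 0.0, 1.0]])   # rotation about world Z (up)
    R_pitch = np.array([[1.0, 0.0, 0.0],
                        [0.0, cp, -sp],
                        [0.0, sp,  cp]])  # tilt about the camera's horizontal axis
    T = np.eye(4)
    T[:3, :3] = R_yaw @ R_pitch
    T[:3, 3] = (x, y, z)
    return T
```

Searching over this 5-DoF space, rather than the full 6-DoF pose space, is what keeps the discrete matching tractable.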
Image-goal navigation benchmark (Habitat). IGL-Nav achieves state-of-the-art performance across all difficulty levels.
Free-view image-goal navigation results. IGL-Nav outperforms all baselines in both zero-shot and supervised settings.
Real-world deployment results of IGL-Nav. A robot is guided by a cellphone-captured goal image taken from a free viewpoint, successfully reaching the target in complex indoor scenes.