We introduce PixLoc, a neural network that localizes a given image via direct feature alignment with a 3D model of the environment. PixLoc is trained end-to-end and is interpretable, accurate, and generalizes to new scenes and across domains, e.g. from outdoors to indoors. It is described in our CVPR 2021 paper, Back to the Feature: Learning Robust Camera Localization from Pixels to Pose (see the citation at the end of this README).
PixLoc is built with Python >=3.6 and PyTorch. The package pixloc includes code for both training and evaluation. Installing the package locally also installs the minimal dependencies listed in requirements.txt:
git clone https://github.com/cvg/pixloc/
cd pixloc/
pip install -e .
Generating visualizations and animations requires extra dependencies that can be installed with:
pip install -e .[extra]
Paths to the datasets and to training and evaluation outputs are defined in pixloc/settings.py. The default structure is as follows:
.
├── datasets # public datasets
└── outputs
├── training # checkpoints and training logs
├── hloc # 3D models and retrieval for localization
└── results # outputs of the evaluation
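For reference, pixloc/settings.py defines these locations as simple path variables. The sketch below shows the expected shape of the file; DATA_PATH and LOC_PATH are referenced later in this README, while the other names and exact values are assumptions:

from pathlib import Path

root = Path(__file__).parent.parent  # top-level directory of the repository
DATA_PATH = root / 'datasets/'  # public datasets
TRAINING_PATH = root / 'outputs/training/'  # checkpoints and logs (name is an assumption)
LOC_PATH = root / 'outputs/hloc/'  # 3D models and retrieval for localization
EVAL_PATH = root / 'outputs/results/'  # evaluation outputs (name is an assumption)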
Visualizations
Demo
Have a look at the Jupyter notebook demo.ipynb to localize an image and animate the predictions in 2D and 3D. This requires downloading the pre-trained weights and the data of either the Aachen Day-Night or the Extended CMU Seasons dataset.
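A command along these lines should fetch both; it relies on the download script introduced in the Datasets section, and the exact selector names are assumptions (check --help):

python -m pixloc.download --select checkpoints Aachen CMU --CMU_slices 2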
The notebook visualize_confidences.ipynb shows how to visualize the confidences of the predictions over image sequences and turn them into videos.
Datasets
The codebase can evaluate PixLoc on the following datasets: 7Scenes, Cambridge Landmarks, Aachen Day-Night, Extended CMU Seasons, and RobotCar Seasons. Running the evaluation requires downloading the following assets:
The given dataset;
Sparse 3D Structure-from-Motion point clouds and results of the image retrieval, both generated with our toolbox hloc and hosted here;
Weights of the model trained on CMU or MegaDepth, hosted here.
We provide a convenient script to download all assets for one or multiple datasets.
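The invocation below is a sketch: the selectors mirror the dataset names above, plus checkpoints for the model weights, but treat the exact syntax as an assumption:

python -m pixloc.download --select [7Scenes|Cambridge|Aachen|CMU|RobotCar|checkpoints]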
(see --help for additional arguments like --CMU_slices)
Evaluation
To perform the localization on all queries of one of the supported datasets, simply launch the corresponding run script:
python -m pixloc.run_[7Scenes|Cambridge|Aachen|CMU|RobotCar] # choose one
Optional flags:
--results path_to_output_poses defaults to outputs/results/pixloc_[dataset_name].txt
--from_poses to refine the poses estimated by hloc rather than starting from reference poses
--inlier_ranking to run the oracle baseline using inliers counts of hloc
--scenes to select a subset of the scenes of the 7Scenes and Cambridge Landmarks datasets
Any configuration entry can be overwritten from the command line thanks to OmegaConf. For example, to increase the number of reference images, add refinement.num_dbs=5; a combined invocation is sketched below.
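For instance, a run combining several of these options on Cambridge Landmarks could look as follows (the scene name and output path are only illustrations):

python -m pixloc.run_Cambridge --scenes KingsCollege --results outputs/results/my_run.txt refinement.num_dbs=5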
Running the script displays the evaluation metrics for 7Scenes and Cambridge Landmarks; for the other datasets, the estimated poses must be uploaded to the evaluation server hosted at visuallocalization.net.
Training
Data preparation
The 3D point clouds, camera poses, and intrinsic parameters are preprocessed together to allow for fast data loading during training. These files are generated using the scripts pixloc/pixlib/preprocess_[cmu|megadepth].py. This data is also hosted here and can be downloaded with the same download script.
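The --training flag in the sketch below is an assumption; check --help for the exact syntax:

python -m pixloc.download --select CMU MegaDepth --training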
This also downloads the training split of the CMU dataset. The undistorted MegaDepth data (images) can be downloaded from the D2-Net repository.
Training experiment
The training framework and detailed usage instructions are described in pixloc/pixlib/. Training experiments are defined by configuration files, examples of which are given in pixloc/pixlib/configs/. For example, PixLoc can be trained on the CMU dataset with a single command.
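A sketch of such a command, assuming a train entry point at pixloc.pixlib.train and a CMU config name in pixloc/pixlib/configs/ (both are assumptions); the first argument names the experiment, whose checkpoints and logs are written under outputs/training/:

python -m pixloc.pixlib.train pixloc_cmu_reproduce --conf pixloc/pixlib/configs/train_pixloc_cmu.yaml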
To track the loss and evaluation metrics throughout training, we use Tensorboard:
tensorboard --logdir outputs/training/
Once the validation loss has saturated (around 20k-40k iterations), the training can be interrupted with Ctrl+C. All training experiments were conducted with a single NVIDIA RTX 2080 Ti GPU, but the code supports multi-GPU training for faster convergence.
Running PixLoc on your own data
Localizing requires calibrated and posed reference images as well as calibrated query images. We also need a sparse SfM point cloud and a list of image pairs obtained with image retrieval; you can easily generate both with our toolbox hloc.
Taking pixloc/run_Aachen.py as a template, we can copy the file structure of the Aachen dataset and/or adjust the variable default_paths, which stores local subpaths relative to DATA_PATH and LOC_PATH (defined in pixloc/settings.py).
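As an illustration, adapting default_paths for a custom scene might look like the sketch below; the Paths class and its field names are assumptions modeled on run_Aachen.py, so match them to the actual definition:

default_paths = Paths(  # Paths as imported in pixloc/run_Aachen.py (assumed)
    query_images='my_scene/query/',  # image subpaths, relative to DATA_PATH
    reference_images='my_scene/ref/',
    reference_sfm='my_scene/sfm/',  # hloc SfM model, relative to LOC_PATH
    query_list='my_scene/queries_with_intrinsics.txt',  # calibrated query list
    retrieval_pairs='my_scene/pairs-query-netvlad10.txt',  # hloc retrieval output
    results='pixloc_my_scene.txt',  # written under outputs/results/
)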
3D viewer
We provide in viewer/ a simple web-based visualizer built with three.js. Quantities of interest (3D points, 2D projections, camera trajectories) are first written to a JSON file and then loaded in the front-end. The trajectory can be animated and individual frames captured to generate a video.
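A minimal sketch of producing such a JSON dump (the schema is hypothetical; match the keys to what the front-end in viewer/ actually expects):

import json
import numpy as np

dump = {
    'points3D': np.random.rand(100, 3).tolist(),  # N x 3 world points
    'projections2D': np.random.rand(100, 2).tolist(),  # N x 2 image points
    'trajectory': [np.eye(4).tolist()],  # list of 4x4 camera poses
}
with open('dump.json', 'w') as f:
    json.dump(dump, f)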
Geometry objects
We provide in pixloc/pixlib/geometry/wrappers.py PyTorch objects for representing SE(3) Poses and Camera models with lens distortion. With a torch.Tensor-like interface, these objects support batching, GPU computation, backpropagation, and operations over 3D and 2D points:
from pixloc.pixlib.geometry import Pose, Camera
R    # rotation matrix with shape (B,3,3)
t    # translation vector with shape (B,3)
p3D  # 3D points with shape (B,N,3)

T_w2c = Pose.from_Rt(R, t)
T_w2c = T_w2c.cuda()   # behaves like a torch.Tensor
p3D_c = T_w2c * p3D    # transform points
T_A2C = T_B2C @ T_A2B  # chain Pose objects

cam1 = Camera.from_colmap(dict)  # from a COLMAP dict
cam = torch.stack([cam1, cam2])  # batch Camera objects

p2D, mask = cam.world2image(p3D_c)  # project and undistort
J, mask = cam.J_world2image(p3D_c)  # Jacobian of the projection
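For instance, gradients propagate through pose operations as for regular tensors. The snippet below is a small sketch under the assumption that Pose.from_Rt preserves the computation graph, which the torch.Tensor-like design suggests:

import torch
from pixloc.pixlib.geometry import Pose

B, N = 2, 10
R = torch.eye(3)[None].repeat(B, 1, 1)  # identity rotations, shape (B,3,3)
t = torch.zeros(B, 3, requires_grad=True)  # translations to optimize
p3D = torch.randn(B, N, 3)  # random 3D points

T_w2c = Pose.from_Rt(R, t)
loss = (T_w2c * p3D).square().mean()  # toy objective on transformed points
loss.backward()  # gradients flow back to t
print(t.grad.shape)  # torch.Size([2, 3])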
Since the publication of the paper, we have substantially refactored the codebase, with many usability improvements and updated dependencies. As a consequence, the results of the evaluation and training might slightly deviate from (and often improve over) the original numbers found in our CVPR 2021 paper. If you are writing a paper, for consistency with the literature, please report the original numbers. If you are building on top of this codebase, consider reporting the latest numbers for fairness.
BibTex Citation
Please consider citing our work if you use any of the ideas presented in the paper or the code from this repo:
@inproceedings{sarlin21pixloc,
author = {Paul-Edouard Sarlin and
Ajaykumar Unagar and
Måns Larsson and
Hugo Germain and
Carl Toft and
Victor Larsson and
Marc Pollefeys and
Vincent Lepetit and
Lars Hammarstrand and
Fredrik Kahl and
Torsten Sattler},
title = {{Back to the Feature: Learning Robust Camera Localization from Pixels to Pose}},
booktitle = {CVPR},
year = {2021},
}