OpenSource Name: Liumouliu/OriCNN
OpenSource URL: https://github.com/Liumouliu/OriCNN
OpenSource Language: Python 96.4%
OpenSource Introduction:

# Lending Orientation to Neural Networks for Cross-view Geo-localization

This repository contains the ACT dataset and the code for training the cross-view geo-localization method described in: Lending Orientation to Neural Networks for Cross-view Geo-localization, CVPR 2019.

## Abstract

This paper studies the image-based geo-localization (IBL) problem using ground-to-aerial cross-view matching. The goal is to predict the spatial location of a ground-level query image by matching it against a large geotagged aerial image database (e.g., satellite imagery). This is a challenging task due to the drastic differences in viewpoint and visual appearance. Existing deep learning methods for this problem focus on maximizing the feature similarity between spatially close image pairs while minimizing it for image pairs that are far apart, by learning deep feature embeddings based on the visual appearance of the ground and aerial images. However, in everyday life, humans commonly use orientation information as an important cue for spatial localization. Inspired by this insight, this paper proposes a novel method that endows deep neural networks with a commonsense notion of orientation. Given a ground-level spherical panoramic image as the query (and a large geo-referenced satellite image database), we design a Siamese network that explicitly encodes the orientation (i.e., spherical direction) of each pixel of the images. Our method significantly boosts the discriminative power of the learned deep features, leading to much higher recall and precision than all previous methods. Our network is also more compact, using only 1/5 of the parameters of the previously best-performing network. To evaluate the generalization of our method, we also created a large-scale cross-view localization benchmark containing 100K geotagged ground-aerial pairs covering a geographic area of 300 square miles.
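The per-pixel orientation encoding described above can be illustrated with a small sketch. This is only a rough approximation under assumed angle conventions, not the repository's actual preprocessing (the real definitions are in the ground_view_orientations and satellite_view_orientations files referenced below); the helper names are hypothetical:

```python
import numpy as np

def ground_orientation_maps(height, width):
    """Per-pixel orientation for an equirectangular ground panorama.
    Assumed convention: channel 0 = azimuth, channel 1 = elevation,
    both normalized to [-1, 1]. Illustrative sketch only."""
    azimuth = np.linspace(-1.0, 1.0, width, dtype=np.float32)     # full 360-degree sweep
    elevation = np.linspace(1.0, -1.0, height, dtype=np.float32)  # zenith to nadir
    az_map, el_map = np.meshgrid(azimuth, elevation)               # each (H, W)
    return np.stack([az_map, el_map], axis=-1)                     # (H, W, 2)

def satellite_orientation_maps(size):
    """Per-pixel orientation for a north-aligned satellite crop.
    Assumed convention: channel 0 = east-west offset, channel 1 = north-south
    offset from the image center, normalized to [-1, 1]."""
    coords = np.linspace(-1.0, 1.0, size, dtype=np.float32)
    ew_map, ns_map = np.meshgrid(coords, -coords)                  # each (S, S)
    return np.stack([ew_map, ns_map], axis=-1)                     # (S, S, 2)

def append_orientation(rgb, orientation):
    """Concatenate orientation maps to an RGB image, giving a 5-channel network input."""
    return np.concatenate([rgb.astype(np.float32), orientation], axis=-1)

# Example: a 128x512 ground panorama and a 256x256 satellite image (sizes are placeholders)
ground_input = append_orientation(np.zeros((128, 512, 3)), ground_orientation_maps(128, 512))
sat_input = append_orientation(np.zeros((256, 256, 3)), satellite_orientation_maps(256))
print(ground_input.shape, sat_input.shape)  # (128, 512, 5) (256, 256, 5)
```

In this sketch, each branch of the Siamese network would consume the 5-channel tensor instead of plain RGB, so the learned features are aware of which direction every pixel faces.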
## ACT dataset

Our ACT dataset is targeted at fine-grained, city-scale cross-view localization. The ground-view images are panoramas, and the overhead images are satellite images. The ACT dataset densely covers the Canberra city area, and a sample cross-view pair is depicted below. Our ACT dataset has two subsets (contact me for the dataset: [email protected]).

To download the dataset, I would suggest using wget, for example:

`wget --continue --progress=dot:mega --tries=0 THE_LINK_I_SEND_YOU`

The suffix of the downloaded archive is tar.gz. If you fail to extract the compressed files on Ubuntu, a convenient workaround is to use WinRAR on a Windows PC.

Note that the dataset is ONLY permitted to be used for research. Do not redistribute it.

## Codes and Models

### Overview

Our model is implemented in TensorFlow 1.4.0; other TensorFlow versions should also work. All our models are trained from scratch, so please run the training code to obtain the models.

For the pre-trained model on the CVUSA dataset, please download CVUSA_model. For the pre-trained model on the CVACT dataset, please download CVACT_model. The CVUSA_model and CVACT_model downloads also include the pre-extracted feature embeddings, in case you want to use them directly.

If you want to know how the training performance improves over the epochs, please refer to recalls_epoches_CVUSA and recalls_epoches_CVACT. If you want to know how the cross-view orientations are defined, please refer to ground_view_orientations and satellite_view_orientations (see also the illustrative sketch after the abstract above).

### Codes for CVUSA dataset

If you want to use the CVUSA dataset, first download it, and then modify the `img_root` variable in `input_data_rgb_ori_m1_1_augument.py` (line 12), for example: `img_root = '..../..../CVUSA/'`

For training, run: `python train_deep6_scratch_m1_1_concat3conv_rgb_ori_gem_augment.py`

For testing, run: `python eval_deep6_scratch_m1_1_concat3conv_rgb_ori_gem_augment.py`

Recall@1% is automatically calculated after running the evaluation script and is saved to the PreTrainModel folder. To calculate the recall@N figures, you need to use the extracted feature embeddings and run the MATLAB script `RecallN.m`, changing the `desc_path` variable to point to your descriptor file (a rough Python equivalent is sketched after the CVACT notes below).

### Codes for CVACT dataset

Most of the steps for the ACT dataset are the same as for the CVUSA dataset. The differences are:
- Change the first line to `from input_data_ACT_test import InputData`.
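If MATLAB is not available, recall@N (and recall@1%) can also be computed from the extracted feature embeddings in Python. The sketch below is a minimal approximation, assuming the ground and satellite descriptors are available as two aligned NumPy arrays in which row i of each array corresponds to the same location; the file names and variable names are hypothetical, not the format of the released descriptor files:

```python
import numpy as np

def recall_at_n(grd_desc, sat_desc, top_ns=(1, 5, 10)):
    """Recall@N for cross-view retrieval.
    grd_desc: (num_queries, dim) ground-view descriptors.
    sat_desc: (num_queries, dim) satellite descriptors; row i is the true match of query i.
    Returns a dict mapping N -> fraction of queries whose true match ranks within the top N."""
    # L2-normalize so that a dot product equals cosine similarity
    grd = grd_desc / np.linalg.norm(grd_desc, axis=1, keepdims=True)
    sat = sat_desc / np.linalg.norm(sat_desc, axis=1, keepdims=True)
    similarity = grd @ sat.T                                   # (num_queries, num_database)
    correct = similarity[np.arange(len(grd)), np.arange(len(grd))]
    # Rank of the true match = number of database items scoring strictly higher
    ranks = (similarity > correct[:, None]).sum(axis=1)
    results = {n: float((ranks < n).mean()) for n in top_ns}
    # Recall@1% uses the top 1% of the database size as the cutoff
    one_percent = max(int(round(0.01 * len(sat))), 1)
    results["1%"] = float((ranks < one_percent).mean())
    return results

# Hypothetical usage with embeddings saved as .npy files:
# grd = np.load("ground_descriptors.npy")
# sat = np.load("satellite_descriptors.npy")
# print(recall_at_n(grd, sat))
```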
## Publication

If you like, you can cite our publication: Liu Liu and Hongdong Li, "Lending Orientation to Neural Networks for Cross-view Geo-localization", in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.

```bibtex
@InProceedings{Liu_2019_CVPR,
  author    = {Liu, Liu and Li, Hongdong},
  title     = {Lending Orientation to Neural Networks for Cross-view Geo-localization},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2019}
}
```

and also the following prior works:
## Contact

If you have any questions, drop me an email ([email protected]).