

Open-source project: dongdonghy/global-localization-object-detection

Repository URL: https://github.com/dongdonghy/global-localization-object-detection

Language: Python 78.8%

Global Localization Using Object Detection with Semantic Map

Overview

Global localization is a key problem for autonomous robots. We combine a semantic map with object detection to perform global localization using a maximum-likelihood estimation (MLE) method.
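The MLE idea can be sketched in plain Python. This is a toy illustration, not the repository's actual implementation: the semantic map, the bearing-only observation model, and the Gaussian noise parameter are all assumptions made for the example. Given detected objects and their bearings in the robot frame, we score candidate poses by the likelihood of the observed bearings and keep the best one.

```python
import math

# Toy semantic map: object class -> world (x, y). Assumed layout for illustration.
SEMANTIC_MAP = {"door": (4.0, 0.0), "chair": (0.0, 3.0), "plant": (4.0, 3.0)}

def bearing(pose, point):
    """Bearing of `point` in the robot frame for pose = (x, y, yaw), wrapped to [-pi, pi)."""
    x, y, yaw = pose
    raw = math.atan2(point[1] - y, point[0] - x) - yaw
    return (raw + math.pi) % (2 * math.pi) - math.pi

def log_likelihood(pose, observations, sigma=0.1):
    """Gaussian log-likelihood of observed bearings given a candidate pose."""
    ll = 0.0
    for cls, obs_bearing in observations:
        err = bearing(pose, SEMANTIC_MAP[cls]) - obs_bearing
        err = (err + math.pi) % (2 * math.pi) - math.pi  # wrap angle error
        ll += -0.5 * (err / sigma) ** 2
    return ll

def mle_pose(observations, step=0.5):
    """Brute-force grid search over (x, y, yaw); return the most likely pose."""
    best, best_ll = None, float("-inf")
    for xi in range(9):                 # x in [0, 4] m
        for yi in range(7):             # y in [0, 3] m
            for ti in range(16):        # yaw in steps of pi/8
                pose = (xi * step, yi * step, ti * math.pi / 8)
                ll = log_likelihood(pose, observations)
                if ll > best_ll:
                    best, best_ll = pose, ll
    return best

# Simulate noiseless observations from a true pose, then recover it.
true_pose = (2.0, 1.5, math.pi / 2)
obs = [(cls, bearing(true_pose, pt)) for cls, pt in SEMANTIC_MAP.items()]
est = mle_pose(obs)
```

A real system would replace the grid search with a smarter optimizer and feed in noisy detections, but the scoring structure is the same.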

Dependencies

  • JetPack 3.3
  • TensorFlow 1.9.0
  • OpenCV
  • Matplotlib
  • PIL
  • ROS Kinetic

Hardware

  • Jetson TX2
  • Lidar
  • USB camera
  • Autonomous robot
  • Odometry by encoder or IMU

Motivation

  • In a ROS system, if we use the move_base package, we need to input a 2D initial pose by hand.
  • Therefore, we want to calculate the initial pose automatically.

How to Run

Object Detection Model

  • Train an object detection model using TensorFlow.
  • Export the frozen model and put it into the frozen_model folder.
  • Put the whole package into a ROS workspace.
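A frozen detection model typically returns arrays of boxes, scores, and class IDs. The ROS-free sketch below shows the kind of post-processing that feeds localization; the function names and the 60° horizontal field of view are assumptions for illustration, not the repository's actual interface. We filter detections by confidence and turn each box into a bearing in the camera frame.

```python
import math

def filter_detections(boxes, scores, classes, score_thresh=0.5):
    """Keep detections whose confidence meets the threshold.

    boxes:   [ymin, xmin, ymax, xmax] per detection, normalized to [0, 1]
    scores:  confidence per detection
    classes: integer class ID per detection
    """
    return [(b, s, c) for b, s, c in zip(boxes, scores, classes)
            if s >= score_thresh]

def box_to_bearing(box, hfov_deg=60.0):
    """Approximate bearing of a detection from the horizontal box center,
    assuming a camera with horizontal field of view `hfov_deg`;
    0 at the image center, positive to the left."""
    _, xmin, _, xmax = box
    cx = (xmin + xmax) / 2.0
    return math.radians(hfov_deg) * (0.5 - cx)

# Example: three raw detections, two of which pass the default threshold.
boxes = [[0.1, 0.1, 0.4, 0.3], [0.5, 0.5, 0.9, 0.8], [0.2, 0.4, 0.3, 0.6]]
scores = [0.92, 0.35, 0.71]
classes = [1, 2, 3]
kept = filter_detections(boxes, scores, classes)
```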

Semantic Map

  • We build a semantic map with Gmapping and object detection.
  • The background is the occupancy grid map, and the points in the map represent object positions.
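A minimal way to represent such a map is an occupancy grid plus a list of object landmarks in world coordinates. The structure below is an assumed sketch, not the repository's actual file format:

```python
class SemanticMap:
    """Occupancy grid metadata plus object landmarks (illustrative structure)."""

    def __init__(self, resolution, origin):
        self.resolution = resolution      # meters per grid cell
        self.origin = origin              # world (x, y) of cell (0, 0)
        self.objects = []                 # list of (class_name, x, y)

    def add_object(self, cls, x, y):
        self.objects.append((cls, x, y))

    def world_to_cell(self, x, y):
        """Convert world coordinates to grid cell indices."""
        return (int((x - self.origin[0]) / self.resolution),
                int((y - self.origin[1]) / self.resolution))

# A 0.25 m/cell grid whose lower-left corner sits at (-10, -10) in the world.
m = SemanticMap(resolution=0.25, origin=(-10.0, -10.0))
m.add_object("door", 1.0, 2.5)
cell = m.world_to_cell(1.0, 2.5)
```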

ROS Preparation

Before estimating the initial pose, you need to run the following nodes in ROS:

  • map_server to publish the map
  • robot control: publishes cmd_vel and subscribes to Odometry
  • a lidar such as Hokuyo to publish the scan data
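These prerequisites might be brought up with a launch file along the following lines. The package, node, and file names here are illustrative; adapt the map path, base driver, and lidar driver to your robot.

```xml
<launch>
  <!-- map_server publishes the saved grid map on /map -->
  <node pkg="map_server" type="map_server" name="map_server"
        args="$(find my_robot)/maps/map.yaml" />

  <!-- hypothetical robot base driver: publishes odometry, subscribes to cmd_vel -->
  <node pkg="my_robot" type="base_controller" name="base_controller" />

  <!-- Hokuyo lidar driver publishing /scan -->
  <node pkg="urg_node" type="urg_node" name="lidar" />
</launch>
```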

Global Localization

  • Run python initial_pose.py in the scripts folder.
  • The node subscribes to the scan and imu/data topics and requires a USB camera.
  • It publishes cmd_vel to rotate the robot, webcam_image, and the final initialpose.
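The initialpose topic carries a geometry_msgs/PoseWithCovarianceStamped, whose orientation is a quaternion. Converting an estimated (x, y, yaw) into those fields is plain math and needs no ROS installation; the dict below is an illustrative stand-in for the real message type:

```python
import math

def yaw_to_quaternion(yaw):
    """Quaternion (x, y, z, w) for a rotation of `yaw` about the z-axis,
    as used in the orientation field of an initialpose message."""
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))

def make_initialpose(x, y, yaw):
    """Assemble the pose fields as a plain dict (stand-in for the ROS message)."""
    qx, qy, qz, qw = yaw_to_quaternion(yaw)
    return {
        "position": {"x": x, "y": y, "z": 0.0},
        "orientation": {"x": qx, "y": qy, "z": qz, "w": qw},
    }

# Pose facing +y (yaw = 90 degrees) at (2.0, 1.5).
msg = make_initialpose(2.0, 1.5, math.pi / 2)
```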

Other Functions

  • camera_save.py: a simple script to save the camera image
  • visilize.py: an example script to test the frozen model on a video
  • send_goal.cpp: we also provide a function that sends the navigation goal through voice recognition, using the Baidu speech package: https://github.com/DinnerHowe/baidu_speech
  • center path: you need to alter grid_path.cpp and input your own path in the global planner package of the navigation stack.


