Research on Visual SLAM for Mobile Robots Based on Kinect
[Abstract]: An intelligent mobile robot must navigate and localize autonomously in complex environments, and simultaneous localization and mapping (SLAM) is the prerequisite and key to fully autonomous motion. Visual SLAM has attracted wide attention from researchers because cameras are inexpensive, provide rich information, and yield easily extracted features. Since the Kinect camera captures RGB-D information about the environment conveniently and quickly, it is widely used in visual SLAM. Mainstream RGB-D visual SLAM systems consist of an image-processing front end and a pose-optimization back end. Addressing the real-time performance of visual SLAM, this thesis focuses on the front end, studies the steps that dominate the system's runtime, and proposes improvements. Because front-end image-processing efficiency directly determines the real-time performance of the whole SLAM system, the thesis introduces an optical-flow method to track the motion of feature points across images rapidly, compares it with traditional feature matching, and proposes a scheme that combines the two. In motion estimation, optical flow estimates the robot's motion in real time; to eliminate the error accumulated during motion estimation, loop-closure detection based on feature matching adds constraints between robot poses. To further speed up the subsequent pose optimization, the loop-detection stage combines a local-loop and a random-loop search strategy.
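The optical-flow tracking the abstract describes can be illustrated with the classic Lucas-Kanade method. The sketch below is a minimal pure-Python version on a synthetic image pair (the function name and the toy intensity pattern are illustrative, not from the thesis): it estimates the (dx, dy) translation of a patch between two frames by solving the 2x2 normal equations of the brightness-constancy constraint Ix·dx + Iy·dy + It = 0.

```python
def lucas_kanade(frame1, frame2, x0, y0, half_win):
    """Estimate optical flow at (x0, y0) over a square window."""
    sxx = sxy = syy = sxt = syt = 0.0
    for y in range(y0 - half_win, y0 + half_win + 1):
        for x in range(x0 - half_win, x0 + half_win + 1):
            # central-difference spatial gradients, forward temporal gradient
            ix = (frame1[y][x + 1] - frame1[y][x - 1]) / 2.0
            iy = (frame1[y + 1][x] - frame1[y - 1][x]) / 2.0
            it = frame2[y][x] - frame1[y][x]
            sxx += ix * ix
            sxy += ix * iy
            syy += iy * iy
            sxt += ix * it
            syt += iy * it
    det = sxx * syy - sxy * sxy
    if abs(det) < 1e-12:   # aperture problem: gradients degenerate in the window
        return None
    # solve [sxx sxy; sxy syy] @ [dx; dy] = -[sxt; syt]
    dx = (-sxt * syy + syt * sxy) / det
    dy = (-syt * sxx + sxt * sxy) / det
    return dx, dy

# Synthetic frames: intensity I(x, y) = x * y, second frame shifted by +1 in x.
W = 16
frame1 = [[float(x * y) for x in range(W)] for y in range(W)]
frame2 = [[float((x - 1) * y) for x in range(W)] for y in range(W)]

flow = lucas_kanade(frame1, frame2, x0=8, y0=8, half_win=3)
print(flow)  # the patch moved +1 pixel in x, so flow comes out close to (1.0, 0.0)
```

In practice, a pyramidal implementation such as OpenCV's `calcOpticalFlowPyrLK` would be used on real frames; this toy version only shows why flow tracking is cheap compared with descriptor extraction and matching.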
In the back-end pose optimization, the robot poses and pose constraints obtained from motion estimation and loop detection are optimized globally with the g2o framework. Experiments on a benchmark dataset compare the optical-flow and feature-matching approaches: relative to traditional feature matching, the proposed method improves runtime efficiency by 28.5% while preserving localization accuracy, effectively improving the real-time performance of the visual SLAM system. Finally, online experiments in a real scene show that the system estimates the robot's trajectory and builds a three-dimensional map of an indoor scene in real time.
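The effect of the back-end graph optimization can be sketched with a toy 1D pose graph (this is an illustrative least-squares formulation, not the actual g2o API): odometry edges chain consecutive poses, one loop-closure edge ties the last pose back to the start, and solving the resulting linear least-squares problem spreads the accumulated odometry drift over the whole trajectory.

```python
def solve_linear(a, b):
    """Solve a @ x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def optimize_pose_graph(odometry, loop):
    """Poses x0..xn with x0 fixed at 0; each edge (i, j, z) measures x_j - x_i."""
    n = len(odometry)                      # free variables x1..xn
    edges = [(i, i + 1, z) for i, z in enumerate(odometry)] + [loop]
    ata = [[0.0] * n for _ in range(n)]    # normal equations A^T A x = A^T b
    atb = [0.0] * n
    for i, j, z in edges:
        row = [0.0] * n                    # Jacobian row of residual x_j - x_i - z
        if i > 0:
            row[i - 1] = -1.0
        if j > 0:
            row[j - 1] = 1.0
        for p in range(n):
            for q in range(n):
                ata[p][q] += row[p] * row[q]
            atb[p] += row[p] * z
    return [0.0] + solve_linear(ata, atb)

# Each odometry step over-estimates a 1 m motion by 10 cm; the loop closure
# says the robot ended exactly 4 m from the start.
poses = optimize_pose_graph([1.1, 1.1, 1.1, 1.1], (0, 4, 4.0))
print(poses)  # drift spread over the chain, approx [0.0, 1.02, 2.04, 3.06, 4.08]
```

Real SLAM back ends such as g2o do the same thing in SE(3), where the residuals are nonlinear in the pose parameters and Gauss-Newton or Levenberg-Marquardt iterates this linear solve; the 1D case needs only one step because the residuals are already linear.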
【Degree-granting institution】: Nanchang University
【Degree level】: Master's
【Year conferred】: 2017
【CLC number】: TP391.41; TP242