Research on Improving RGB-D-Based SLAM Methods
Published: 2018-11-06 16:19
[Abstract]: SLAM (Simultaneous Localization and Mapping) is widely recognized as the core of robot autonomous navigation and remains one of the most challenging research topics. RGB-D sensors such as the Kinect capture both the color information of the surrounding environment and the corresponding depth information directly; their data are simple to process and well suited to 3D map reconstruction. SLAM research based on RGB-D sensors, known as RGB-D SLAM, is currently an active topic in the field of robot autonomous navigation. To address the low efficiency and large error of the original RGB-D SLAM method, this thesis studies and improves both the front end and the back end of the pipeline, yielding an RGB-D SLAM method with better accuracy, robustness, and real-time performance. The specific contributions are as follows. First, the working principle, intrinsic and extrinsic parameter models, and calibration method of the Kinect sensor used to collect RGB-D data are studied. The Kinect's color and depth cameras are calibrated and aligned with a joint calibration toolbox in Matlab, and the point clouds obtained before and after calibration are compared, verifying that calibration improves the rate of correct matches between RGB image pixels and depth image pixels. Second, based on a study of the front-end stages of the RGB-D SLAM pipeline (feature detection and descriptor extraction, feature matching, outlier rejection, motion estimation, and motion optimization), an improved outlier-rejection algorithm is proposed that combines bidirectional matching with a threshold method. The algorithm takes less time (14.3%, 14.7%, and 58.6% less when used with SIFT, SURF, and ORB, respectively) while retaining more correct matches (5.7%, 34.7%, and 26.9% more with SIFT, SURF, and ORB, respectively). Third, based on a study of the back-end stages (pose-graph generation, loop-closure detection, pose-graph optimization, and generation of the motion trajectory and 3D point-cloud map), an improved loop-closure detection algorithm is proposed that combines short-range frame-by-frame detection, long-range random detection, and the bag-of-visual-words (BoVW) idea; the pose graph it produces is cleaner and takes less time to build. Finally, the RGB-D SLAM method before and after the improvements is evaluated on the public Computer Vision Group dataset with the corresponding evaluation tools, confirming that the improved system builds maps with better accuracy and real-time performance. In addition, field experiments were conducted with a Kinect mounted on a Turtlebot robot; the system can generate and continuously update the pose graph and 3D point-cloud map while the robot is running, verifying the robustness and effectiveness of the improved RGB-D SLAM method.
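The improved outlier-rejection step combines bidirectional (cross-check) matching with a distance threshold: a correspondence survives only if each descriptor is the other's nearest neighbour and their distance is small enough. The abstract gives only the idea, so the following is a minimal sketch under that reading, with plain Euclidean distance over toy descriptors; the function names and the threshold value are illustrative, not from the thesis.

```python
import math

def nearest(desc, pool):
    """Return (index, distance) of the pool descriptor closest to desc."""
    best_i, best_d = -1, float("inf")
    for i, d in enumerate(pool):
        dist = math.dist(desc, d)
        if dist < best_d:
            best_i, best_d = i, dist
    return best_i, best_d

def cross_check_matches(desc_a, desc_b, max_dist=0.5):
    """Keep a match (i, j) only if A[i]'s nearest neighbour in B is B[j],
    B[j]'s nearest neighbour in A is A[i], and their distance is below
    the threshold -- the bidirectional-plus-threshold filter."""
    matches = []
    for i, da in enumerate(desc_a):
        j, d_ab = nearest(da, desc_b)
        i_back, _ = nearest(desc_b[j], desc_a)
        if i_back == i and d_ab <= max_dist:
            matches.append((i, j))
    return matches
```

In practice the same cross-check idea is what OpenCV's brute-force matcher enables with its `crossCheck` flag; a real pipeline would apply it to SIFT, SURF, or ORB descriptors rather than raw coordinates.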
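The improved loop-closure scheme checks the frames immediately preceding the current one (short-range, frame by frame) plus a random sample of older frames (long-range), with BoVW used to score appearance similarity. The BoVW scoring is omitted here; this sketch only shows the candidate-selection strategy, with parameter names and counts chosen for illustration rather than taken from the thesis.

```python
import random

def loop_closure_candidates(current_idx, n_near=5, n_far=5, seed=None):
    """For the frame at current_idx, pick loop-closure candidates:
    the n_near immediately preceding frames (short-range, frame by
    frame) plus n_far frames sampled uniformly at random from the
    remaining older frames (long-range)."""
    near = list(range(max(0, current_idx - n_near), current_idx))
    far_pool = list(range(0, max(0, current_idx - n_near)))
    rng = random.Random(seed)
    far = rng.sample(far_pool, min(n_far, len(far_pool)))
    return near, far
```

Each candidate would then be compared against the current frame (e.g. by BoVW similarity followed by geometric verification) before an edge is added to the pose graph.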
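The Computer Vision Group dataset referred to above is the TUM RGB-D benchmark, whose tools report accuracy as absolute trajectory error (ATE), typically the RMSE over paired estimated and ground-truth positions. As a rough sketch of that metric (the benchmark tools also perform a rigid alignment between the two trajectories, which is omitted here):

```python
import math

def ate_rmse(estimated, ground_truth):
    """Root-mean-square absolute trajectory error over paired,
    already-aligned 3D positions."""
    assert len(estimated) == len(ground_truth)
    sq = [math.dist(p, q) ** 2 for p, q in zip(estimated, ground_truth)]
    return math.sqrt(sum(sq) / len(sq))
```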
【Degree-granting institution】: Harbin Institute of Technology
【Degree level】: Master's
【Year conferred】: 2017
【CLC number】: TP242
Article No.: 2314809
Link: http://sikaile.net/kejilunwen/zidonghuakongzhilunwen/2314809.html