

Research on Multi-Robot Visual-Laser Simultaneous Localization and Mapping in Large-Scale Environments

Published: 2018-11-16 12:49
[Abstract]: Simultaneous Localization and Mapping (SLAM) is the core technology that lets a mobile robot operate in an unknown environment. With the rapid development of robotics, researchers have proposed many excellent SLAM algorithms, but most target a single robot; when the environment is large-scale or complex, a single robot cannot perform SLAM stably. This thesis proposes a multi-robot SLAM algorithm based on laser and vision to address the low efficiency, limited task capacity, and weak robustness of single-robot systems. Although multi-robot SLAM can effectively overcome these limitations, it also faces challenges that single-robot SLAM does not. First, the robots do not know their initial relative poses, so links between them cannot be established directly, and there is no obvious strategy for fusing the individual maps into one complete, consistent map. Second, each robot's localization accumulates error; after map fusion these errors are superimposed and corrupt the fused map, and eliminating their influence is another difficulty. To address these problems, this thesis proposes a multi-robot SLAM algorithm that fuses laser and vision. Laser SLAM offers high real-time performance and accurate mapping, but in structurally repetitive or complex environments, relying on the laser alone easily produces false loop closures and degrades the map. Vision, by contrast, provides rich information and fast scene recognition, so this thesis uses vision to assist the laser in loop-closure detection, eliminating the effect of each robot's accumulated error on mapping and localization.
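The vision-assisted loop-closure check described above can be sketched as a bag-of-visual-words comparison. This is an illustrative toy, not the thesis implementation: the visual-word IDs and the similarity threshold are made up, and each image is reduced to a histogram of word occurrences compared by cosine similarity.

```python
# Hypothetical sketch of scene matching for loop-closure detection:
# each image becomes a histogram of visual-word IDs, and two scenes
# are compared by cosine similarity of their histograms.
import math
from collections import Counter

def word_histogram(word_ids):
    """Count occurrences of each visual word in one image."""
    return Counter(word_ids)

def cosine_similarity(h1, h2):
    """Cosine similarity between two sparse word histograms."""
    dot = sum(h1[w] * h2[w] for w in h1 if w in h2)
    n1 = math.sqrt(sum(v * v for v in h1.values()))
    n2 = math.sqrt(sum(v * v for v in h2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def is_loop_closure(current, candidate, threshold=0.8):
    """Declare a loop closure when the scenes share enough visual words."""
    return cosine_similarity(current, candidate) >= threshold

# Two views of the same scene share most visual words; an unrelated
# scene shares none (word IDs here are illustrative).
a = word_histogram([3, 3, 7, 12, 12, 12, 41])
b = word_histogram([3, 7, 12, 12, 41, 41])
c = word_histogram([1, 2, 5, 9])
```

A candidate accepted here would then be verified geometrically before adding a loop-closure constraint to the pose graph.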
In addition, the robots exchange visual information over TCP/IP sockets and use it to establish links with one another. When one robot reaches a place another robot has already traversed, the visual scene recognition algorithm adopted in this thesis establishes pose constraints between nodes of the robots' pose graphs. Using these nodes as bridges, the transformation between the robots' coordinate frames is estimated with random sample consensus (RANSAC) and least squares; this transformation maps all pose graphs into a single coordinate frame, completing the pose-graph fusion. The fused pose graph is then optimized with Gauss-Newton iteration to correct the accumulated error, and finally the laser data attached to each node is combined to generate a global occupancy grid map. Both single-robot loop-closure detection and multi-robot constraint building rely on the visual scene recognition algorithm. To make it efficient, this thesis adopts ORB features and the bag-of-visual-words technique: ORB features extracted from an image are represented as visual words, and a lookup table is built over these words. Because lookup-table matching is very fast, matching efficiency improves and the scene recognition algorithm remains real-time. To validate the proposed algorithm, a multi-robot experimental platform based on Turtlebot was built and multiple experiments were run in different environments; the fused multi-robot maps agree well with the real environments, verifying the algorithm's effectiveness and feasibility.
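The RANSAC-plus-least-squares step above, which aligns two robots' coordinate frames from matched pose-graph nodes, can be sketched as follows. This is a minimal 2-D illustration under assumed data, not the thesis code: the matched point pairs, tolerance, and iteration count are invented, and the least-squares fit is the closed-form 2-D rigid alignment.

```python
# Sketch of estimating the rigid transform (rotation theta, translation
# tx, ty) between two robots' frames from matched nodes, with RANSAC
# rejecting wrong scene-recognition matches. All data is illustrative.
import math
import random

def fit_rigid_2d(src, dst):
    """Least-squares rotation + translation mapping src points onto dst."""
    n = len(src)
    cx_s = sum(p[0] for p in src) / n; cy_s = sum(p[1] for p in src) / n
    cx_d = sum(p[0] for p in dst) / n; cy_d = sum(p[1] for p in dst) / n
    # Accumulate the 2x2 cross-covariance terms that give the angle.
    sxx = sxy = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        ax, ay = xs - cx_s, ys - cy_s
        bx, by = xd - cx_d, yd - cy_d
        sxx += ax * bx + ay * by      # cos component
        sxy += ax * by - ay * bx      # sin component
    theta = math.atan2(sxy, sxx)
    tx = cx_d - (math.cos(theta) * cx_s - math.sin(theta) * cy_s)
    ty = cy_d - (math.sin(theta) * cx_s + math.cos(theta) * cy_s)
    return theta, tx, ty

def apply(theta, tx, ty, p):
    """Apply the rigid transform to a 2-D point."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1] + tx, s * p[0] + c * p[1] + ty)

def ransac_rigid_2d(src, dst, iters=200, tol=0.1, seed=0):
    """Fit on random 2-point samples, keep the model with most inliers."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(iters):
        i, j = rng.sample(range(len(src)), 2)   # minimal 2-point sample
        model = fit_rigid_2d([src[i], src[j]], [dst[i], dst[j]])
        inliers = [k for k in range(len(src))
                   if math.dist(apply(*model, src[k]), dst[k]) < tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = model, inliers
    # Refit on all inliers for the final least-squares estimate.
    return fit_rigid_2d([src[k] for k in best_inliers],
                        [dst[k] for k in best_inliers]), best_inliers

# Made-up matched nodes: frame B is frame A rotated 90 degrees and
# shifted by (1, 2), plus one wrong match from a false scene recognition.
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 2.0), (3.0, 1.0)]
dst = [apply(math.pi / 2, 1.0, 2.0, p) for p in src]
src.append((5.0, 5.0)); dst.append((10.0, -3.0))    # outlier pair
model, inliers = ransac_rigid_2d(src, dst)
```

Once the transform is found, every node of one robot's pose graph is mapped through it into the common frame, completing the fusion step the abstract describes.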
The results of this thesis can be widely applied to home service robots, logistics robots, UAVs, and related fields, and are of considerable significance for the wider adoption of robots in China.
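The Gauss-Newton pose-graph optimization mentioned in the abstract can be illustrated on a toy one-dimensional graph, where a loop-closure edge corrects accumulated odometry drift. This is a didactic sketch, not the thesis implementation: poses are scalars, edge measurements are invented, and because the 1-D problem is linear a single Gauss-Newton step already converges.

```python
# Toy Gauss-Newton pose-graph optimization in 1-D: poses are scalars,
# each edge (i, j, z) measures the relative displacement z between
# poses i and j, and pose 0 is held fixed as the anchor.

def gauss_newton_1d(poses, edges, iters=10):
    """Minimize sum of ((poses[j]-poses[i]) - z)^2 over edges."""
    n = len(poses)
    for _ in range(iters):
        # Build the normal equations H dx = -b.
        H = [[0.0] * n for _ in range(n)]
        b = [0.0] * n
        for i, j, z in edges:
            r = (poses[j] - poses[i]) - z   # residual; Jacobian entries are +/-1
            H[i][i] += 1; H[j][j] += 1
            H[i][j] -= 1; H[j][i] -= 1
            b[i] -= r; b[j] += r
        # Anchor pose 0: solve the reduced system by Gaussian elimination.
        A = [H[r][1:] + [-b[r]] for r in range(1, n)]
        for col in range(n - 1):
            piv = max(range(col, n - 1), key=lambda r: abs(A[r][col]))
            A[col], A[piv] = A[piv], A[col]
            for r in range(col + 1, n - 1):
                f = A[r][col] / A[col][col]
                A[r] = [a - f * c for a, c in zip(A[r], A[col])]
        dx = [0.0] * (n - 1)
        for r in range(n - 2, -1, -1):
            s = A[r][n - 1] - sum(A[r][c] * dx[c] for c in range(r + 1, n - 1))
            dx[r] = s / A[r][r]
        for k in range(1, n):
            poses[k] += dx[k - 1]
    return poses

# Odometry drifts by +0.1 per step; the loop-closure edge (0, 3, 3.0)
# pulls the chain back and spreads the error over all poses.
poses = gauss_newton_1d([0.0, 1.1, 2.2, 3.3],
                        [(0, 1, 1.1), (1, 2, 1.1), (2, 3, 1.1), (0, 3, 3.0)])
```

In the thesis setting the poses are 2-D robot poses and the edges come from odometry, laser scan matching, and the vision-established inter-robot constraints, but the normal-equation structure is the same.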
[Degree-granting institution]: Harbin Institute of Technology
[Degree level]: Master's
[Year conferred]: 2017
[CLC number]: TP242





Article link: http://sikaile.net/kejilunwen/zidonghuakongzhilunwen/2335560.html

