Research on Key Technologies of an Omnidirectional Vision System for Mobile Robots
Topic: mobile robot + omnidirectional vision; Source: Master's thesis, North Minzu University (北方民族大學(xué)), 2017
【Abstract】: With the continuing development of science and technology, robotics plays an increasingly important role in production, daily life, military, rescue, and counter-terrorism applications. To complete its assigned tasks, a robot must acquire information about the surrounding scene. Because robot systems are extensible, decision-making information can be supplied in many ways, the most important of which is machine vision. A traditional robot vision system has a single, narrow viewing angle and cannot capture global information in one shot; yet many practical applications require ultra-wide-angle or even 360° panoramic images in order to obtain more field-of-view information. Omnidirectional vision technology meets this need: thanks to its large field of view, applying it to the design of a robot vision system provides the robot with more complete and efficient scene information, giving it good application prospects.

Owing to its physical structure, a fisheye lens has a much wider viewing angle than an ordinary lens, so over the same sensor area a fisheye image carries more scene information. For the same field-of-view requirement, fewer fisheye lenses are therefore needed than ordinary lenses, fewer images must be stitched, and fewer stitching operations are performed, which significantly reduces the computational load of video image processing and improves the real-time performance of the system. Building on an analysis of the principles of traditional omnidirectional vision systems, this thesis uses fisheye lenses to capture the robot's working scene and applies improved distortion-correction and image-stitching algorithms to provide the robot with comprehensive, efficient scene information.

The work focuses on two aspects. (1) Image algorithms: the effective region of the raw distorted fisheye image is extracted and corrected, and an improved fast fisheye-correction method based on longitude coordinates is proposed; the corrected images are then stitched with the SURF algorithm, with the design and implementation of the stitching pipeline optimized, to produce the final omnidirectional image. (2) Simulation experiments: the mobile robot and its experimental environment are modeled on the professional mobile-robot simulation platform Webots, and the effects of the proposed image algorithms are studied comparatively. The improved fisheye correction and stitching algorithms are ported to this simulation environment to verify the feasibility of the system design. Simulation results show that the improved fisheye-correction algorithm and the subsequent stitching algorithm run with good real-time performance and yield satisfactory omnidirectional surround-view images, meeting the mobile robot's requirements for an omnidirectional vision system.
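The thesis does not reproduce the formulas of its longitude-coordinate correction here, but the general idea behind this family of methods can be sketched: each column of the output square is treated as a meridian of the fisheye circle, and the vertical coordinate is compressed onto the chord of the circle at that column. The following numpy-only sketch (function names and the exact mapping are illustrative assumptions, not the thesis's improved algorithm) builds such a backward map and applies it with nearest-neighbour sampling:

```python
import numpy as np

def longitude_correction_map(radius, out_size):
    """Build a backward mapping (output pixel -> fisheye source coordinate)
    for a basic longitude-coordinate correction. The valid fisheye circle is
    assumed centred at (radius, radius) with the given radius.

    Returns (src_x, src_y), each of shape (out_size, out_size).
    """
    v, u = np.mgrid[0:out_size, 0:out_size].astype(np.float64)
    # Normalise output coordinates to [-1, 1] around the centre.
    x = (u - out_size / 2.0) / (out_size / 2.0)
    y = (v - out_size / 2.0) / (out_size / 2.0)
    # Half-height of the fisheye circle's chord at this horizontal position:
    # meridians near the edge of the circle are shorter, so the output column
    # must be compressed onto them.
    chord = np.sqrt(np.clip(1.0 - x ** 2, 0.0, 1.0))
    src_x = radius + x * radius
    src_y = radius + y * chord * radius
    return src_x, src_y

def remap_nearest(img, src_x, src_y):
    """Apply the mapping with nearest-neighbour sampling (numpy only)."""
    h, w = img.shape[:2]
    xi = np.clip(np.rint(src_x).astype(int), 0, w - 1)
    yi = np.clip(np.rint(src_y).astype(int), 0, h - 1)
    return img[yi, xi]
```

In practice a production implementation would precompute the map once and use bilinear interpolation (e.g. `cv2.remap`); the extraction of the effective circular region (centre and radius) would come from a contour- or scan-line-based detector such as those cited in the thesis's references.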
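After SURF keypoints are matched between two corrected views, stitching reduces to estimating a 3×3 homography that aligns one image with the other. As a minimal, numpy-only illustration of that step (the feature detection and RANSAC filtering that a real pipeline such as `cv2.findHomography` would add are assumed done), the Direct Linear Transform over matched point pairs looks like this:

```python
import numpy as np

def homography_dlt(src_pts, dst_pts):
    """Estimate the homography H with dst ~ H @ src via the Direct Linear
    Transform. src_pts / dst_pts: (N, 2) arrays of matched keypoint
    coordinates, N >= 4, no three points collinear."""
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        # Each correspondence contributes two rows of the constraint A h = 0.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # h is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=np.float64))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pts):
    """Apply homography H to (N, 2) points, returning (N, 2) points."""
    p = np.hstack([pts, np.ones((len(pts), 1))])
    q = p @ H.T
    return q[:, :2] / q[:, 2:3]
```

Once H is known, the second image is warped into the first image's frame and the overlap is blended; optimizing that warp-and-blend stage is where the thesis reports its stitching improvements.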
【Degree-granting institution】: North Minzu University
【Degree level】: Master's
【Year conferred】: 2017
【CLC number】: TP391.41; TP242