Research on 3D Reconstruction Based on Binocular Vision with Uncalibrated Cameras
Topic: binocular vision. Focus: 3D reconstruction. Source: Harbin Institute of Technology, 2017 master's thesis. Type: degree thesis.
[Abstract]: 3D reconstruction based on binocular vision with uncalibrated cameras applies the binocular measurement principle: two cameras with unknown parameters at different positions, or a single camera after rotation and translation, photograph the same scene to obtain two images, and a 3D model of the scene is built from the disparity between them. Because imaging equipment is inexpensive and widely applicable, binocular-vision-based 3D reconstruction has become an important approach to 3D reconstruction. Its central research problems are calibrating the camera correctly, improving image-matching accuracy, estimating the fundamental matrix precisely, and solving for accurate 3D point coordinates; this thesis addresses each of them. First, camera calibration methods are introduced in detail. For the standard test images, a vanishing-point-based calibration method using the Hough transform calibrates the camera without knowledge of its motion or related parameters; for self-captured images, Zhang's planar-template calibration method is adopted, and the calibration template improves calibration accuracy. Second, the SIFT and SURF algorithms are compared; since runtime is not critical, SIFT is chosen as the basis of subsequent work for its larger number of successfully matched point pairs, and an improved mismatch-rejection method is proposed that removes obvious false matches before the fundamental matrix is computed, improving the accuracy of the fundamental-matrix solution. Then, an improved robust fundamental-matrix estimation method is proposed that combines the strengths of the RANSAC and M-estimator algorithms; it estimates the fundamental matrix more accurately, and the estimate is used to constrain the matched point pairs and reject remaining false matches, improving the matching accuracy of the feature point pairs. Finally, structure from motion is applied: the camera's rotation matrix and translation vector are recovered from the intrinsic parameters and the fundamental matrix. For the standard test images, the 3D coordinates of space points are solved, and matched pairs that deviate severely from the main scene are rejected using scene-depth consistency; the 3D points are triangulated to express the scene's geometric structure more clearly, and OpenGL texture mapping is used to recover a realistic 3D scene. To verify the generality of the algorithm, the same pipeline is applied to self-captured images, effectively reconstructing the self-captured scene. Error analysis of the 3D models shows that the reconstruction algorithm in this thesis is reliable.
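The combined RANSAC/M-estimator stage described in the abstract can be sketched as follows. This is a minimal NumPy illustration under standard textbook assumptions, not the thesis's implementation: the normalized 8-point algorithm, Sampson error, Huber tuning constant (1.345), and iteration counts are assumed choices, and Python/NumPy is assumed since the thesis does not specify a language.

```python
import numpy as np

def _normalize(pts):
    """Hartley normalization: centroid at origin, mean distance sqrt(2)."""
    c = pts.mean(axis=0)
    scale = np.sqrt(2) / np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
    T = np.array([[scale, 0, -scale * c[0]],
                  [0, scale, -scale * c[1]],
                  [0, 0, 1.0]])
    ph = np.column_stack([pts, np.ones(len(pts))]) @ T.T
    return ph, T

def _eight_point(x1, x2, w=None):
    """(Weighted) normalized 8-point estimate of F from Nx2 correspondences."""
    p1, T1 = _normalize(x1)
    p2, T2 = _normalize(x2)
    # each row is kron(x2, x1), the linear constraint x2^T F x1 = 0
    A = np.column_stack([p2[:, :1] * p1, p2[:, 1:2] * p1, p1])
    if w is not None:
        A = A * np.sqrt(w)[:, None]
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)              # enforce rank 2
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    F = T2.T @ F @ T1                        # undo normalization
    return F / F[2, 2]

def _sampson(F, x1, x2):
    """Sampson (first-order geometric) error of each correspondence."""
    p1 = np.column_stack([x1, np.ones(len(x1))])
    p2 = np.column_stack([x2, np.ones(len(x2))])
    Fp1, Ftp2 = p1 @ F.T, p2 @ F
    e = np.sum(p2 * Fp1, axis=1)
    return e ** 2 / (Fp1[:, 0] ** 2 + Fp1[:, 1] ** 2
                     + Ftp2[:, 0] ** 2 + Ftp2[:, 1] ** 2)

def estimate_F(x1, x2, iters=200, thresh=1e-4, seed=0):
    """RANSAC to find an inlier set, then Huber-weighted IRLS refinement."""
    rng = np.random.default_rng(seed)
    best_inl, best_n = None, -1
    for _ in range(iters):
        idx = rng.choice(len(x1), 8, replace=False)
        F = _eight_point(x1[idx], x2[idx])
        inl = _sampson(F, x1, x2) < thresh
        if inl.sum() > best_n:
            best_n, best_inl = inl.sum(), inl
    # M-estimator stage: iteratively reweighted 8-point with Huber weights
    i1, i2 = x1[best_inl], x2[best_inl]
    F = _eight_point(i1, i2)
    for _ in range(5):
        r = np.sqrt(_sampson(F, i1, i2))
        k = 1.345 * np.median(r) + 1e-12     # Huber tuning constant
        w = np.where(r <= k, 1.0, k / np.maximum(r, 1e-12))
        F = _eight_point(i1, i2, w)
    return F

# synthetic check: two noiseless views of random 3D points
rng = np.random.default_rng(1)
th = 0.3
R = np.array([[np.cos(th), 0, np.sin(th)],
              [0, 1, 0],
              [-np.sin(th), 0, np.cos(th)]])
t = np.array([1.0, 0.2, 0.1])
X = rng.uniform(-1, 1, (60, 3)) + np.array([0, 0, 5.0])
x1 = X[:, :2] / X[:, 2:]                     # view 1 (normalized coords)
x2 = (X @ R.T + t)[:, :2] / (X @ R.T + t)[:, 2:]   # view 2
F = estimate_F(x1, x2)
resid = _sampson(F, x1, x2).max()
print("max Sampson error:", resid)
```

On noiseless synthetic data the recovered F satisfies the epipolar constraint to machine precision; with real SIFT matches the RANSAC threshold would be set in pixels and the Huber stage damps the remaining outliers rather than discarding them outright.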
[Degree-granting institution]: Harbin Institute of Technology
[Degree level]: Master's
[Year conferred]: 2017
[CLC number]: TP391.41
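The motion-recovery step outlined in the abstract — forming the essential matrix E = KᵀFK from the fundamental matrix and intrinsics, decomposing it into a rotation and translation, and resolving the four-fold ambiguity by triangulating points and keeping the pose that puts them in front of both cameras — can be sketched as below. This is a generic textbook formulation assumed from the description, not the thesis's code; with normalized image coordinates (K = I) the essential matrix is used directly.

```python
import numpy as np

def decompose_E(E):
    """Four (R, t) candidates from an essential matrix via SVD."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:  U = -U        # ensure proper rotations
    if np.linalg.det(Vt) < 0: Vt = -Vt
    W = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1.0]])
    t = U[:, 2]                              # translation direction (unit norm)
    return [(U @ W @ Vt, t), (U @ W @ Vt, -t),
            (U @ W.T @ Vt, t), (U @ W.T @ Vt, -t)]

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one correspondence."""
    A = np.array([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def recover_pose(E, x1, x2):
    """Pick the (R, t) whose triangulated points lie in front of both cameras."""
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    best = None
    for R, t in decompose_E(E):
        P2 = np.hstack([R, t[:, None]])
        X = np.array([triangulate(P1, P2, a, b) for a, b in zip(x1, x2)])
        Xc2 = X @ R.T + t
        n_front = int(np.sum((X[:, 2] > 0) & (Xc2[:, 2] > 0)))  # cheirality
        if best is None or n_front > best[0]:
            best = (n_front, R, t, X)
    return best[1], best[2], best[3]

# synthetic check: build E = [t]x R from a known pose and recover it
rng = np.random.default_rng(2)
th = 0.25
R_true = np.array([[np.cos(th), 0, np.sin(th)],
                   [0, 1, 0],
                   [-np.sin(th), 0, np.cos(th)]])
t_true = np.array([0.8, 0.1, 0.05])
tx = np.array([[0, -t_true[2], t_true[1]],
               [t_true[2], 0, -t_true[0]],
               [-t_true[1], t_true[0], 0]])
E = tx @ R_true
X = rng.uniform(-1, 1, (30, 3)) + np.array([0, 0, 6.0])
x1 = X[:, :2] / X[:, 2:]
Xc2 = X @ R_true.T + t_true
x2 = Xc2[:, :2] / Xc2[:, 2:]
R_hat, t_hat, X_rec = recover_pose(E, x1, x2)
print("rotation ok:", np.allclose(R_hat, R_true, atol=1e-6))
```

The translation is recovered only up to scale (t_hat is a unit vector), which is the inherent scale ambiguity of uncalibrated reconstruction noted in work of this kind; the depth-consistency test in the abstract would then prune triangulated points far from the main scene.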