Research on Image Fusion Methods for Augmented-Reality-Based Surgical Navigation
Topic: augmented reality + surgical navigation; Source: Jilin University, master's thesis, 2015
【Abstract】: The augmented-reality-based image fusion technology used in surgical navigation systems has changed traditional surgical practice: it reduces patient suffering, effectively lowers surgical risk, enables precise preoperative planning, and is also highly valuable for medical training. Augmented-reality image fusion superimposes a virtual model of the lesion onto the real surgical site, giving the surgeon additional support for diagnosis.

This thesis studies two-dimensional image fusion as well as three-dimensional fusion and visualization. By fusing the outer surface of the operative site in 2D and its internal physiological information in 3D, the amount of available lesion information is increased. The specific research content and results are as follows:

1. An augmented reality system was designed around a real surgical procedure. It consists of four parts: 3D reconstruction of the virtual skull model, spatial localization for the augmented reality system, reconstruction of the real surgical scene, and virtual-real fusion display. A camera calibration method based on the Matlab calibration toolbox completes the spatial localization of the system and prepares the ground for the augmented-reality fusion stage.

2. Blue artificial marker points that are easy to identify and extract were designed. A target window is used to locate and crop the skull region, the blue markers are extracted, the centroid coordinates of the markers are computed and labelled, the left and right images are matched with a Laplacian-matrix-based image matching method, and finally weighted image fusion together with maximum (minimum) gray-value fusion completes the 2D fusion of the outer surface of the left and right skull images.

3. The principles and methods of virtual-real registration were studied, and a fused-image display method is proposed. The virtual skull model is reconstructed from 3D scan data of the skull, the real surgical scene is recovered from multi-view photographs by 3D scene reconstruction, and the virtual skull model is finally imported into the real surgical scene space and displayed on the computer.

The system was studied experimentally on the platform built for this work. The binocular cameras were calibrated first; experimental analysis shows a calibration error within 0.3 pixels, i.e. sub-pixel accuracy. The images captured by the left and right cameras were then matched with the Laplacian method and fused by weighted fusion and maximum (minimum) gray-value fusion, and the reconstructed virtual model was finally imported into the real surgical scene and displayed on the computer screen. These intermediate results lay the groundwork for subsequent research on dynamic visualization.
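Point 1 of the abstract relies on binocular (stereo) camera calibration to localize the system in space. The sketch below shows a minimal checkerboard-based stereo calibration; it uses OpenCV rather than the Matlab calibration toolbox used in the thesis, and the board geometry, square size, and image file names are illustrative assumptions.

```python
# Sketch of stereo (binocular) camera calibration with a checkerboard target,
# using OpenCV instead of the Matlab calibration toolbox used in the thesis.
# Board size, square size, and file names are illustrative assumptions.
import glob
import cv2
import numpy as np

BOARD = (9, 6)          # inner corners per row / column (assumed)
SQUARE = 25.0           # square edge length in mm (assumed)

# 3D coordinates of the checkerboard corners in the board's own frame
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

obj_pts, left_pts, right_pts = [], [], []
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)

for lf, rf in zip(sorted(glob.glob("left_*.png")), sorted(glob.glob("right_*.png"))):
    gl = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
    okl, cl = cv2.findChessboardCorners(gl, BOARD)
    okr, cr = cv2.findChessboardCorners(gr, BOARD)
    if okl and okr:
        # refine corners to sub-pixel accuracy, needed for a < 0.3 px error target
        cl = cv2.cornerSubPix(gl, cl, (11, 11), (-1, -1), criteria)
        cr = cv2.cornerSubPix(gr, cr, (11, 11), (-1, -1), criteria)
        obj_pts.append(objp)
        left_pts.append(cl)
        right_pts.append(cr)

size = gl.shape[::-1]
# intrinsics of each camera first, then the stereo extrinsics (R, T)
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
rms, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, d1, K2, d2, size,
    flags=cv2.CALIB_FIX_INTRINSIC, criteria=criteria)
print("stereo RMS reprojection error (pixels):", rms)
```

The RMS reprojection error returned by the stereo calibration is measured in pixels, which is the same figure of merit as the sub-pixel (< 0.3 px) calibration error reported in the experiments.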
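Point 2 begins with extracting the blue marker points and computing and labelling their centroids. The sketch below shows one plausible way to do this with HSV color thresholding and image moments; the crop window, HSV range, and area threshold are assumptions, not values from the thesis.

```python
# Minimal sketch of the blue-marker extraction and centroid labelling step.
import cv2
import numpy as np

def marker_centroids(img_bgr, window=(100, 100, 800, 600)):
    """Return labelled centroid coordinates of blue markers inside a crop window."""
    x, y, w, h = window                     # assumed target window around the skull
    roi = img_bgr[y:y + h, x:x + w]
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    # assumed HSV range for the blue markers
    mask = cv2.inRange(hsv, (100, 120, 60), (130, 255, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    pts = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 10:                   # ignore tiny blobs
            cx = m["m10"] / m["m00"] + x    # centroid back in full-image coordinates
            cy = m["m01"] / m["m00"] + y
            pts.append((cx, cy))

    # label centroids in a fixed order (top-to-bottom, then left-to-right) so the
    # same label refers to the same physical marker in the left and right images
    pts.sort(key=lambda p: (round(p[1] / 20), p[0]))
    return {i: p for i, p in enumerate(pts)}
```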
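Point 2 then matches the left and right marker sets with a Laplacian-matrix-based image matching method (see reference 8 below). The exact algorithm of that paper is not reproduced in the abstract, so the following is only a generic spectral-matching sketch in the same spirit: each point set is embedded with eigenvectors of a Gaussian-weighted graph Laplacian, and the two embeddings are matched one-to-one; sigma and k are assumed parameters.

```python
# Generic spectral point-matching sketch in the spirit of a Laplacian-matrix
# based matching method; not the exact algorithm of the cited paper.
import numpy as np
from scipy.optimize import linear_sum_assignment

def laplacian_embedding(points, k=3, sigma=50.0):
    """Embed 2D points with k eigenvectors of the Gaussian-weighted graph Laplacian."""
    p = np.asarray(points, dtype=float)
    d2 = np.sum((p[:, None, :] - p[None, :, :]) ** 2, axis=-1)
    w = np.exp(-d2 / (2 * sigma ** 2))      # affinity (proximity) matrix
    np.fill_diagonal(w, 0.0)
    lap = np.diag(w.sum(axis=1)) - w        # unnormalised graph Laplacian
    vals, vecs = np.linalg.eigh(lap)
    emb = vecs[:, 1:k + 1]                  # skip the constant eigenvector
    # eigenvectors are defined only up to sign; fix it so both images agree
    signs = np.sign(emb[np.argmax(np.abs(emb), axis=0), np.arange(emb.shape[1])])
    return emb * signs

def match_markers(left_pts, right_pts, k=3):
    """Return index pairs (i, j) matching left markers to right markers."""
    el = laplacian_embedding(left_pts, k)
    er = laplacian_embedding(right_pts, k)
    cost = np.linalg.norm(el[:, None, :] - er[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)   # minimum-cost one-to-one matching
    return list(zip(rows, cols))
```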
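The 2D fusion itself uses a weighted fusion rule and a maximum (minimum) gray-value rule. A minimal sketch of both pixel-level rules follows; it assumes the left and right views have already been registered so that corresponding pixels align, and the weight 0.5 is an assumption.

```python
# Pixel-level fusion rules for the 2D fusion of the left and right skull images:
# a weighted average and a per-pixel maximum / minimum of gray values.
import numpy as np

def weighted_fusion(img_a, img_b, w=0.5):
    """Weighted gray-value fusion: F = w*A + (1-w)*B."""
    a = img_a.astype(np.float32)
    b = img_b.astype(np.float32)
    return np.clip(w * a + (1.0 - w) * b, 0, 255).astype(np.uint8)

def extreme_fusion(img_a, img_b, mode="max"):
    """Per-pixel maximum (or minimum) gray-value fusion."""
    op = np.maximum if mode == "max" else np.minimum
    return op(img_a, img_b)
```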
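Point 3 imports the virtual skull model into the real surgical scene and displays the fused result. The sketch below shows a common way to realize such a virtual-real overlay: projecting the model's 3D points into the camera image with the calibrated parameters and blending them over the frame. The pose (rvec, tvec) aligning model space to camera space is an assumed input from the registration step; this is an illustration, not the thesis's exact display method.

```python
# Sketch of a virtual-real fusion display: project 3D model points into the
# real scene image using the calibrated camera parameters, then blend.
import cv2
import numpy as np

def overlay_model(frame_bgr, model_pts, rvec, tvec, K, dist):
    """Project 3D model points into the image and blend them onto the frame."""
    img_pts, _ = cv2.projectPoints(
        np.asarray(model_pts, dtype=np.float32), rvec, tvec, K, dist)
    overlay = frame_bgr.copy()
    h, w = frame_bgr.shape[:2]
    for (u, v) in img_pts.reshape(-1, 2):
        if 0 <= u < w and 0 <= v < h:
            cv2.circle(overlay, (int(u), int(v)), 2, (0, 0, 255), -1)
    # semi-transparent blend so the real surgical scene stays visible
    return cv2.addWeighted(overlay, 0.6, frame_bgr, 0.4, 0)
```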
【Degree-granting institution】: Jilin University
【Degree level】: Master's
【Year conferred】: 2015
【Classification numbers】: R616; TP242; TP391.41
【References】
Related journal articles (9 items):
1. 任延俊; 郭霏; 姜淑華; 王文生. Global optimization of CCD camera calibration parameters based on the radial alignment constraint [J]. 長(zhǎng)春理工大學(xué)學(xué)報(bào)(自然科學(xué)版), 2009(02).
2. 邊后琴; 譚葉. Matlab implementation of parameter calibration for a stereo vision system [J]. 上海電力學(xué)院學(xué)報(bào), 2011(04).
3. 嵇武; 李寧; 黎介壽. Current status and prospects of robotic surgery in China [J]. 腹腔鏡外科雜志, 2011(02).
4. 徐林. Surgical navigation systems and their application in orthopedics [J]. 廣東醫(yī)學(xué), 2005(02).
5. 申晨. Augmented reality technology in computer vision [J]. 計(jì)算機(jī)光盤軟件與應(yīng)用, 2012(19).
6. 全紅艷; 王長(zhǎng)波; 林俊雋. A survey of vision-based augmented reality technology [J]. 機(jī)器人, 2008(04).
7. 陳靖; 王涌天; 閆達(dá)遠(yuǎn). Augmented reality systems and their applications [J]. 計(jì)算機(jī)工程與應(yīng)用, 2001(15).
8. 梁棟; 童強(qiáng); 王年; 鮑文霞; 屈磊. An image matching algorithm based on the Laplacian matrix [J]. 計(jì)算機(jī)工程與應(yīng)用, 2005(36).
9. 佟帥; 徐曉剛; 易成濤; 邵承永. A survey of vision-based 3D reconstruction techniques [J]. 計(jì)算機(jī)應(yīng)用研究, 2011(07).
Article ID: 2038402
Link: http://sikaile.net/yixuelunwen/waikelunwen/2038402.html