

Research on Segmentation Methods for CT Image Sequences Based on Fuzzy Models and Shape Features

Published: 2018-08-05 19:17
【Abstract】: Computed tomography (CT) is an advanced inspection technique that presents the internal structure and properties of an object directly and clearly in image form, and it is widely used in medical diagnosis and industrial non-destructive testing. As CT technology develops and application requirements grow, using CT image processing to analyze and measure the inspected object quantitatively and automatically, and thus to overcome the shortcomings of qualitative, subjective evaluation, has become an important direction for the development of CT. CT image segmentation is the key step, and the main difficulty, in realizing quantitative analysis, automatic recognition, and measurement. This thesis studies the segmentation of typical three-dimensional CT image sequences and addresses two difficult problems: the segmentation of medical CT images with blurred boundaries, and the segmentation of industrial CT images whose voxel dimensions differ greatly between directions. It proposes an object-membership-based automatic anatomical structure segmentation (AAR) method for whole-body positron emission tomography / computed tomography (PET/CT) images, and a shape-feature-based crack segmentation method for industrial CT images. The main work is as follows:

1. An object-membership-based AAR method for PET/CT images is proposed. The original AAR method segments anatomical structures accurately in diagnostic CT images of good quality, but its accuracy drops for PET images (blurred anatomy) and low-dose CT images (low contrast). Exploiting the gray-level and texture features of organs, this thesis proposes an AAR method based on object membership. In the modeling stage, an object membership function that combines the gray-level and texture information of the training images is introduced to estimate the probability that each voxel belongs to an object. In the segmentation stage, the object membership of the test image is computed with this function; the object model is initially positioned from the membership map, a thresholded optimal-pose search determines the best pose of the model, and the spatial distribution of the object is obtained. Experiments on PET/CT images, evaluated by localization error and scale error, show that the improved method segments anatomical structures more accurately, with an average localization error of only 1-2 voxels and an average scale error close to the ideal value of 1.

2. The optimal-threshold training procedure of the AAR method is improved. The original procedure searches a high-dimensional space, adapts poorly, and applies only to gray-level images. Using a super-mask and cumulative gray-level histograms, this thesis proposes an improved procedure: the cumulative histograms of object and background are computed inside the super-mask, the absolute difference between the areas of the two histograms is evaluated over every possible threshold interval, and the threshold that maximizes this difference is selected as optimal. The improvement reduces the search space from five dimensions to one, makes the search efficient, and avoids missing a possible optimum because of a restricted search range. Experiments show that the improved procedure works on gray-level, texture, and membership images and outputs reasonable object thresholds, leading to more accurate anatomical segmentation.

3. The hierarchical structure of the AAR method is improved. The original AAR method applies only to images of local body regions such as the thorax and abdomen, so a whole-body image has to be divided into local regions by hand. To raise the degree of automation, this thesis exploits the anatomical relationships among the organs of the whole body and proposes a whole-body hierarchy: all organs are organized in a single tree, and modeling and segmentation are carried out organ by organ in breadth-first order. Experiments on whole-body torso PET/CT images show that the improved method segments the torso anatomy accurately and with a higher degree of automation.

4. A cross-modality modeling and initial-segmentation scheme is proposed. Model-based methods usually require training data from the same imaging modality for modeling and initial segmentation, and the possibility of building rapid prototypes usable across modalities has not been considered. Because a fuzzy model encodes object shape and spatial location while remaining independent of the imaging modality, this thesis proposes a cross-modality scheme for the modeling and initial-segmentation steps of AAR. Experiments confirm that fuzzy models built from diagnostic CT images can segment anatomical structures in PET images, low-dose CT images, and their object-membership images. This provides an effective way to build rapid object prototypes applicable to various imaging modalities.

5. A shape-feature-based crack segmentation method for industrial CT image sequences is proposed. Automatically detecting, displaying, and measuring cracks inside a workpiece is one of the difficulties industrial CT has to solve, and image segmentation is the key step. In industrial CT systems the three-dimensional image usually consists of a sequence of slices, and the equivalent voxel size within a slice can differ from the size perpendicular to the slices by a large factor, sometimes more than ten; together with the artifacts common in industrial CT images, this makes crack segmentation and quantitative measurement difficult. To address the problem, this thesis studies an automatic crack segmentation method for industrial CT sequences with anisotropic voxel sizes: a Hessian-based two-dimensional line filter first enhances line-like regions in each slice; on this basis, a two-dimensional histogram that combines the inter-slice continuity of gray level and orientation with the mean gray level of the in-slice linear neighborhood is constructed to suppress the influence of artifacts; finally, the threshold interval is determined from the maximum class entropy of the histogram, yielding the binary crack segmentation. Experiments on industrial CT sequences of real workpieces, evaluated by precision, recall, and the F1 score, show that, compared with four other commonly used methods, the proposed method produces more complete and more accurate segmentations, meets the accuracy requirements of practical industrial CT crack segmentation, and is more automated.
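Point 1 above relies on an object membership function that turns the gray-level and texture statistics of the training images into a per-voxel probability of belonging to an organ. The exact form of the function is not given in this abstract, so the following Python sketch makes simplifying assumptions: Gaussian models for intensity and texture, local standard deviation as the texture measure, and illustrative names (fit_object_model, object_membership, local_std) that are not taken from the thesis.

import numpy as np
from scipy.ndimage import uniform_filter

def local_std(img, size=3):
    # Local standard deviation as a simple texture measure.
    img = img.astype(float)
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img ** 2, size)
    return np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))

def fit_object_model(train_images, train_masks):
    # Gaussian parameters of intensity and texture inside the object,
    # pooled over all training images (masks are boolean arrays).
    g = np.concatenate([img[m] for img, m in zip(train_images, train_masks)])
    t = np.concatenate([local_std(img)[m] for img, m in zip(train_images, train_masks)])
    return {"g_mu": g.mean(), "g_sd": g.std() + 1e-6,
            "t_mu": t.mean(), "t_sd": t.std() + 1e-6}

def object_membership(img, model):
    # Per-voxel membership in [0, 1]: product of intensity and texture likelihoods.
    tex = local_std(img)
    g_like = np.exp(-0.5 * ((img - model["g_mu"]) / model["g_sd"]) ** 2)
    t_like = np.exp(-0.5 * ((tex - model["t_mu"]) / model["t_sd"]) ** 2)
    return g_like * t_like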
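Point 2 reduces the optimal-threshold search to a one-dimensional problem by comparing cumulative gray-level histograms of object and background inside the super-mask. The sketch below is a simplified single-threshold version of that idea (the thesis scores threshold intervals); the bin count and argument names are assumptions.

import numpy as np

def optimal_threshold(image, object_mask, super_mask, bins=256):
    # Threshold that maximizes the absolute difference between the cumulative
    # histograms of object and background voxels inside the super-mask.
    vals = image[super_mask]
    edges = np.linspace(vals.min(), vals.max(), bins + 1)
    h_obj, _ = np.histogram(image[super_mask & object_mask], bins=edges, density=True)
    h_bg, _ = np.histogram(image[super_mask & ~object_mask], bins=edges, density=True)
    diff = np.abs(np.cumsum(h_obj) - np.cumsum(h_bg))   # 1-D search over candidate thresholds
    return edges[np.argmax(diff) + 1]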
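Point 3 arranges all organs of the whole body in a tree and models and segments them in breadth-first order, so that each organ can use its already-recognized parent as a spatial reference. A minimal traversal sketch follows; the placeholder hierarchy and the caller-supplied recognize function are illustrative, not the thesis's actual organ tree.

from collections import deque

# Placeholder whole-body hierarchy: each organ's pose is defined relative to its parent.
hierarchy = {
    "body_trunk": ["thoracic_region", "abdominal_region"],
    "thoracic_region": ["lungs", "heart"],
    "abdominal_region": ["liver", "kidneys"],
    "lungs": [], "heart": [], "liver": [], "kidneys": [],
}

def segment_whole_body(root, recognize):
    # Breadth-first traversal: recognize each organ after its parent so the
    # parent's result can initialize the child's model pose.
    results = {root: recognize(root, parent_result=None)}
    queue = deque([root])
    while queue:
        parent = queue.popleft()
        for child in hierarchy[parent]:
            results[child] = recognize(child, parent_result=results[parent])
            queue.append(child)
    return results

# Example call with a dummy recognizer:
# segment_whole_body("body_trunk", lambda organ, parent_result: f"pose of {organ}")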
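Point 5 first enhances crack-like (dark, line-shaped) structures in every slice with a Hessian-based two-dimensional line filter, before the inter-slice continuity histogram and maximum-entropy thresholding are applied. Below is a minimal per-slice line filter under the assumption that cracks are darker than their surroundings; the scale sigma and the blob-suppression ratio are illustrative choices, and the 2-D histogram stage is not reproduced.

import numpy as np
from scipy.ndimage import gaussian_filter

def line_filter_2d(slice_img, sigma=1.5):
    # Second-order Gaussian derivatives give the Hessian of the slice.
    slice_img = np.asarray(slice_img, dtype=float)
    Ixx = gaussian_filter(slice_img, sigma, order=(2, 0))
    Iyy = gaussian_filter(slice_img, sigma, order=(0, 2))
    Ixy = gaussian_filter(slice_img, sigma, order=(1, 1))
    # Eigenvalues of the 2x2 Hessian at every pixel.
    half_trace = (Ixx + Iyy) / 2.0
    root = np.sqrt(((Ixx - Iyy) / 2.0) ** 2 + Ixy ** 2)
    lam1, lam2 = half_trace + root, half_trace - root
    # A dark line has strong positive curvature across it, weak curvature along it.
    response = np.where(lam1 > 0, lam1, 0.0)
    response[np.abs(lam2) > 0.5 * np.abs(lam1)] = 0.0   # suppress blob-like responses
    return response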
【Degree-granting institution】: Chongqing University
【Degree level】: Doctoral
【Year conferred】: 2016
【CLC number】: TP391.41



Document ID: 2166739


Link to this article: http://sikaile.net/shoufeilunwen/xxkjbs/2166739.html


