Research on Cervical Cell Lesion Image Recognition Based on Convolutional Neural Networks
Keywords: cervical cell lesion recognition; convolutional neural network; network classification performance; sample expansion; batch normalization (BN) algorithm. Source: Guangxi Normal University, 2017 master's thesis. Document type: degree thesis.
[Abstract]: Traditional cervical cell recognition typically proceeds by first segmenting the cell image, selecting features with hand-designed operators, and then applying a classifier. In the segmentation and feature-extraction stages, such methods require a degree of pathological-medical knowledge, and because the features are chosen by hand they are sometimes unrepresentative, leading to poor recognition results. This thesis therefore applies convolutional neural networks (CNNs), under the deep-learning framework, to cervical cell recognition. A CNN is a type of artificial neural network that combines neural networks with deep learning and unifies feature extraction with classification; its defining traits are local receptive fields, weight sharing, and spatial subsampling, which allow it to extract local features of the data, and it has therefore been widely used in image recognition. Applying a CNN model to cervical cell image recognition, the approach in this thesis takes images as direct input and extracts features autonomously, improving the automation and efficiency of cervical cell image recognition. The main work completed is as follows: (1) The theory, characteristics, and structure of CNNs are described in detail as the foundation for model improvements. Building on the LeNet-5 model, several CNN models are constructed whose feature-extracting filter layers use different inter-layer connection schemes; these models are applied to cervical cell image recognition, their classification performance is compared in simulation experiments, and the effect of different filter counts on network performance is analyzed. (2) On this basis, the factors affecting recognition performance are explored further through comparative simulations that vary the convolution kernel size, the subsampling (pooling) method, the activation function, and the size of the image data set. The results show that sensible choices of parameters and methods improve classification performance, and that enlarging the image data set is especially effective. (3) After analyzing these factors, rules for choosing parameters and methods are summarized, and a network structure with the best classification performance for cervical cell images is constructed. The improved network increases the number of filters in the convolutional layers, adds a batch normalization (BN) layer to speed up training and convergence, applies dropout to randomly suppress neurons in the network, and finally uses a softmax classifier to classify cervical cells as normal or lesioned. Simulation results show that the improved CNN achieves a two-class recognition rate of 98.36% on cervical cell images, outperforming the ANN, SVM, KNN, Bayesian, and linear discriminant methods; the recognition rate is 12.21% higher than the traditional Bayesian method and 5.65% higher than the artificial neural network (ANN) method, giving the approach practical value.
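The local receptive fields, weight sharing, and spatial subsampling named above can be sketched in a few lines of plain Python. This is an illustrative toy (hypothetical sizes and kernel, not the thesis's implementation): one shared kernel is slid over every local patch of the image, and a 2x2 max-pooling step subsamples the resulting feature map.

```python
# Toy sketch of the two core CNN operations (illustrative, not the thesis code):
# a shared-weight "valid" convolution over local receptive fields, then 2x2
# max pooling as spatial subsampling.

def conv2d_valid(img, kernel):
    """Slide one shared kernel over every local patch ('valid' mode)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            row.append(sum(img[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

def maxpool2(fmap):
    """2x2 max pooling: keep the strongest response in each block."""
    return [[max(fmap[i][j], fmap[i][j + 1], fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]
edge = [[-1, 1]]                  # 1x2 kernel: responds to vertical edges
fmap = conv2d_valid(img, edge)    # 4x3 feature map, peak at the edge column
pooled = maxpool2(fmap)           # 2x1 subsampled map
```

Because the same kernel weights are reused at every position, the layer has far fewer parameters than a fully connected one, which is what makes the filter-count experiments in (1) tractable.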
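The "sample expansion" experiments in (2) enlarge the image data set with label-preserving transforms. A minimal sketch, assuming simple geometric augmentations (flips and a rotation; the thesis does not specify which transforms it used):

```python
# Hypothetical sample-expansion sketch: each labelled image yields three
# transformed copies, expanding the data set fourfold without new labels.

def hflip(img):
    """Mirror each row (left-right flip)."""
    return [row[::-1] for row in img]

def vflip(img):
    """Reverse the row order (top-bottom flip)."""
    return img[::-1]

def rot90(img):
    """Rotate 90 degrees counter-clockwise."""
    return [list(row) for row in zip(*img)][::-1]

def augment(dataset):
    """Return the originals plus three transformed copies of each sample."""
    out = []
    for img, label in dataset:
        out.append((img, label))
        out.append((hflip(img), label))
        out.append((vflip(img), label))
        out.append((rot90(img), label))
    return out

cells = [([[1, 2], [3, 4]], "normal")]   # one toy 2x2 "image" with its label
expanded = augment(cells)                # 4 samples after expansion
```

Since a cell's pathology class does not depend on its orientation, such transforms add variety cheaply, which matches the abstract's finding that enlarging the data set is the most effective single improvement.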
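The three additions in (3) can likewise be sketched in pure Python. This is a hypothetical forward-pass illustration of each component, not the network itself: batch normalization of a mini-batch of activations, inverted dropout, and a softmax output layer.

```python
# Illustrative sketches (not the thesis code) of the BN layer, dropout, and
# softmax classifier added to the improved network.
import math
import random

def batch_norm(batch, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a mini-batch to zero mean / unit variance, then scale and
    shift by learnable gamma, beta -- the core of the BN algorithm."""
    m = sum(batch) / len(batch)
    var = sum((x - m) ** 2 for x in batch) / len(batch)
    return [gamma * (x - m) / math.sqrt(var + eps) + beta for x in batch]

def dropout(acts, p=0.5, seed=0):
    """Randomly zero activations with probability p at training time;
    survivors are scaled by 1/(1-p) (inverted dropout)."""
    rng = random.Random(seed)
    return [0.0 if rng.random() < p else a / (1.0 - p) for a in acts]

def softmax(logits):
    """Turn raw class scores into a probability distribution."""
    mx = max(logits)                       # subtract max for stability
    exps = [math.exp(z - mx) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

normed = batch_norm([2.0, 4.0, 6.0, 8.0])   # zero-mean, unit-variance batch
thinned = dropout([1.0, 1.0, 1.0])          # some neurons suppressed
probs = softmax([1.0, 3.0])                 # two-class output, e.g. normal vs. lesioned
```

By keeping each layer's input distribution stable, BN permits higher learning rates and faster convergence, while dropout discourages co-adapted neurons; the abstract reports that this combination yields the 98.36% two-class recognition rate.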
[Degree-granting institution]: Guangxi Normal University
[Degree level]: Master's
[Year awarded]: 2017
[CLC numbers]: R737.33; TP391.41; TP183