Research on Stored-Grain Pest Detection Methods Based on Deep Learning
Topic: deep learning + pest detection; Source: Henan University of Technology, 2017 master's thesis
【Abstract】: China is a populous country, a major grain producer, and a major grain-storing country. During storage, annual grain losses in China amount to roughly 0.2% to 0.5% of the total grain stored, and about 50% of that loss is caused by stored-grain pests. Effective pest control has therefore become a key technical issue for China's grain security, and the detection and identification of grain pests is the first and most critical step in pest control. Among the detection and classification methods reported at home and abroad, including probe sampling, trapping, acoustic detection, near-infrared spectroscopy, and image recognition, image recognition has become the research hotspot and the main technical approach because of its high recognition rate, simple operation, and low cost. Traditional image recognition, however, relies mostly on manually designed feature extraction, which has many limitations. Meanwhile, image recognition and classification based on deep learning has become a research hotspot worldwide: deep learning imitates the human visual system with artificial neural networks and learns image features automatically in an unsupervised way, which can significantly improve recognition accuracy. This thesis addresses the stored-grain pest detection problem by exploring deep learning based detection and identification methods. The main research work is as follows (illustrative sketches for items 2 to 4 follow the abstract):
1. Shallow learning methods such as artificial neural networks and BP neural networks are compared with deep learning methods; sparse autoencoders, restricted Boltzmann machines, deep belief networks, and convolutional neural networks (CNNs) are studied systematically, and the models, structures, algorithms, and application evolution of CNNs are analyzed, providing the foundation for deep learning based detection and recognition of grain-pest images.
2. Image data of beetles (the weevil superfamily and the grain-beetle family) and butterflies (flower butterflies and monarch butterflies) are collected and the corresponding databases are built. A five-layer CNN (one input layer, two convolutional layers, two fully connected layers) with a Sigmoid activation function and a mean-squared-error (MSE) loss is designed for grain-pest image detection and recognition experiments, and the problems and shortcomings of this five-layer model are analyzed from the experimental results.
3. To address the poor generalization of models trained on small sample sets, a grain-pest image sample-set construction method based on image distortion is proposed. The training set is enlarged with three augmentation techniques: image scaling, image rotation, and elastic distortion. Experiments show that the CNN trained with the distorted samples generalizes better and that detection and recognition performance improves significantly.
4. To address the limited ability of shallow CNNs to represent complex features, a detection and identification method based on a deeper CNN is proposed. A seven-layer model (one input layer, two convolutional layers, two pooling layers, two fully connected layers) with ReLU activations and a softmax cross-entropy loss is designed and implemented in the Caffe deep learning framework. Detection and recognition experiments show that, without increasing the training cost, the proposed method significantly improves the capture of complex features: the detection and classification rate for beetles reaches 95%, and the recognition rate for butterflies improves by 20%.
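Item 2 of the abstract fixes only the layer makeup (one input, two convolutional, and two fully connected layers), the Sigmoid activation, and the MSE loss. The sketch below is a minimal PyTorch-style rendering of a comparable configuration, not the thesis's implementation; the 28x28 grayscale input, the 6 and 12 feature maps, the 100 hidden units, and the four output classes are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class ShallowGrainPestNet(nn.Module):
    """Roughly mirrors the five-layer model: 2 conv + 2 fully connected, Sigmoid, MSE loss."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # convolutional layer 1: 1x28x28 -> 6x24x24
            nn.Sigmoid(),
            nn.Conv2d(6, 12, kernel_size=5),  # convolutional layer 2: 6x24x24 -> 12x20x20
            nn.Sigmoid(),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(12 * 20 * 20, 100),     # fully connected layer 1
            nn.Sigmoid(),
            nn.Linear(100, num_classes),      # fully connected layer 2
            nn.Sigmoid(),                     # outputs in (0, 1), compared to one-hot targets
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = ShallowGrainPestNet()
criterion = nn.MSELoss()                    # mean squared error against one-hot label vectors
scores = model(torch.randn(8, 1, 28, 28))   # a dummy batch of 8 grayscale 28x28 images
```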
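Item 3 names three augmentation operations: scaling, rotation, and elastic distortion. The following sketch shows one common way to implement them for a 2-D grayscale image with NumPy and SciPy; the parameter ranges (scale factor, rotation angle, and the alpha and sigma of the elastic displacement field) are illustrative assumptions rather than the thesis's settings.

```python
import numpy as np
from scipy.ndimage import zoom, rotate, gaussian_filter, map_coordinates

def random_scale(img, low=0.9, high=1.1):
    """Rescale a 2-D grayscale image by a random factor, then crop/zero-pad back to size."""
    factor = np.random.uniform(low, high)
    scaled = zoom(img, factor, order=1)
    out = np.zeros_like(img)
    h = min(img.shape[0], scaled.shape[0])
    w = min(img.shape[1], scaled.shape[1])
    out[:h, :w] = scaled[:h, :w]   # top-left aligned for brevity
    return out

def random_rotate(img, max_deg=15):
    """Rotate by a random angle in [-max_deg, max_deg], keeping the original image size."""
    angle = np.random.uniform(-max_deg, max_deg)
    return rotate(img, angle, reshape=False, order=1)

def elastic_distort(img, alpha=34.0, sigma=4.0):
    """Elastic distortion: warp the image with a smooth random displacement field."""
    dx = gaussian_filter(np.random.uniform(-1, 1, img.shape), sigma) * alpha
    dy = gaussian_filter(np.random.uniform(-1, 1, img.shape), sigma) * alpha
    y, x = np.meshgrid(np.arange(img.shape[0]), np.arange(img.shape[1]), indexing="ij")
    coords = np.array([y + dy, x + dx])
    return map_coordinates(img, coords, order=1, mode="reflect")

# Example: enlarge the sample set by deriving new images from one original.
original = np.random.rand(64, 64)
augmented = [random_scale(original), random_rotate(original), elastic_distort(original)]
```

Applying one or more of these transforms to every original image multiplies the size of the training set, which is the mechanism the abstract credits for the improved generalization.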
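Item 4 describes a seven-layer model (one input, two convolutional, two pooling, and two fully connected layers) with ReLU activations and a softmax cross-entropy loss, implemented in Caffe. The sketch below expresses a comparable configuration in PyTorch rather than Caffe; the channel counts, kernel sizes, 64x64 RGB input, and four-class output are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DeepGrainPestNet(nn.Module):
    """Roughly mirrors the seven-layer model: 2 conv + 2 pooling + 2 fully connected, ReLU."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, padding=2),   # convolutional layer 1: 3x64x64 -> 32x64x64
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                              # pooling layer 1: 32x64x64 -> 32x32x32
            nn.Conv2d(32, 64, kernel_size=5, padding=2),  # convolutional layer 2: 32x32x32 -> 64x32x32
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                              # pooling layer 2: 64x32x32 -> 64x16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),                 # fully connected layer 1
            nn.ReLU(inplace=True),
            nn.Linear(256, num_classes),                  # fully connected layer 2: raw class scores
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = DeepGrainPestNet()
criterion = nn.CrossEntropyLoss()           # softmax + cross-entropy, applied to the raw scores
logits = model(torch.randn(8, 3, 64, 64))   # a dummy batch of 8 RGB 64x64 images
```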
【Degree-granting institution】: Henan University of Technology
【Degree level】: Master
【Year of degree conferral】: 2017
【Classification number】: S379.5; TP391.41