Adaptive Threshold Segmentation of Cotton Field Canopy Images with Complex Backgrounds Based on a Logistic Regression Algorithm
Published: 2018-03-30 00:38
Topic: algorithms | Focus: cotton | Source: 《农业工程学报》 (Transactions of the Chinese Society of Agricultural Engineering), 2017, Issue 12
【Abstract】: Canopy coverage is an important indicator for monitoring cotton growth, but canopy images are difficult to segment accurately in the complex environment of a cotton field. This paper proposes an adaptive threshold segmentation method for cotton field canopy images with complex backgrounds, based on a logistic regression algorithm. First, the pixels of a canopy image are divided into two classes, leaf canopy and ground background; the H-channel value of each class is extracted in HSV color space, and the green ratio (G/(G+R+B)) is extracted in RGB color space as color features. Logistic regression is then used to determine a segmentation threshold for each color feature, and the H-channel threshold performs the initial segmentation. Next, the low-brightness pixels in the initial result are segmented with an excess-green threshold computed by logistic regression, while the green-ratio threshold performs a secondary segmentation over the combined high- and low-brightness results. Finally, morphological filtering is applied to refine the segmentation. To evaluate the method, 320 canopy images collected from cotton-growing areas in Xinjiang were tested. The results show that the method can effectively segment the canopy region of a cotton field against a complex natural background, with an average relative target-area error rate of only 5.46% and an overall average matching rate of 93.07%. It outperforms the excess-green OTSU method (average relative target-area error rate 11.78%, overall average matching rate 76.43%), the four-component segmentation method (24.11%, 71.67%), and the saliency-based segmentation method (36.92%, 66.92%). The average processing time of the method is 4.63 s, somewhat longer than the excess-green OTSU method (3.84 s) and the four-component method (2.56 s), but shorter than the saliency-based method (6.25 s). The results provide an effective approach for machine-vision monitoring of cotton coverage in the complex natural environment of cotton fields.
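The core thresholding step described in the abstract, fitting a logistic regression to a one-dimensional color feature and reading off the decision boundary as the segmentation threshold, can be sketched as follows. This is not the authors' code: the synthetic green-ratio values, the gradient-descent settings, and the threshold formula (decision boundary at p = 0.5, i.e. threshold = -b/w) are illustrative assumptions.

```python
import math
import random

def fit_logistic_1d(xs, ys, lr=5.0, epochs=3000):
    """Full-batch gradient descent for p(canopy | x) = sigmoid(w*x + b)."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

# Synthetic green-ratio feature G/(G+R+B): soil background tends to score
# lower than leaf canopy (means and spreads here are assumptions).
random.seed(1)
xs = [random.gauss(0.30, 0.04) for _ in range(150)] + \
     [random.gauss(0.45, 0.04) for _ in range(150)]
ys = [0] * 150 + [1] * 150  # 0 = background, 1 = canopy

w, b = fit_logistic_1d(xs, ys)
# The decision boundary p = 0.5 corresponds to w*x + b = 0, so the
# adaptive threshold on this feature is:
threshold = -b / w
```

The same fit can be repeated per feature (H channel, excess green, green ratio) to obtain one adaptive threshold for each stage of the pipeline.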
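The two evaluation metrics reported above, relative target-area error rate and matching rate, can be sketched on binary masks. The abstract does not give their exact formulas, so the definitions below (area difference over reference area; matched canopy pixels over reference canopy pixels) and the toy masks are assumptions for illustration only.

```python
def relative_area_error(seg, ref):
    """|segmented area - reference area| / reference area, on 0/1 masks."""
    a_seg = sum(sum(row) for row in seg)
    a_ref = sum(sum(row) for row in ref)
    return abs(a_seg - a_ref) / a_ref

def matching_rate(seg, ref):
    """Assumed definition: correctly matched canopy pixels / reference canopy pixels."""
    matched = sum(s and r for srow, rrow in zip(seg, ref)
                  for s, r in zip(srow, rrow))
    return matched / sum(sum(row) for row in ref)

# Toy 4x4 masks: the reference has 6 canopy pixels; the segmentation
# recovers 5 of them and misses 1.
ref = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [1, 1, 1, 1],
       [0, 0, 0, 0]]
seg = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [1, 1, 0, 1],
       [0, 0, 0, 0]]

err = relative_area_error(seg, ref)  # |5 - 6| / 6
mr = matching_rate(seg, ref)         # 5 / 6
```

Averaging these two quantities over all 320 test images would yield the aggregate figures quoted in the abstract.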
【Author affiliations】: School of Information Engineering, Ningxia University; College of Information Science and Technology, Shihezi University; Agriculture Bureau of Wujiaqu, Xinjiang
【Funding】: National Natural Science Foundation of China (31460317)
【CLC classification】: S562; TP391.41
【相似文獻(xiàn)】
相關(guān)期刊論文 前1條
1 陳樹越;許九紅;;基于光纖錐視覺的植物葉片脈絡(luò)提取研究[J];農(nóng)機(jī)化研究;2013年12期
本文編號(hào):1683622
本文鏈接:http://sikaile.net/kejilunwen/ruanjiangongchenglunwen/1683622.html