Research on a Limb Decomposition Method for Dairy Cows Based on Skeleton Features
Topic: dairy cows + limb decomposition; Source: master's thesis, Northwest A&F University, 2017
[Abstract]: Livestock farming in China is gradually moving toward large-scale operation, which places stricter demands on feeding and management; applying information technology widely in the production process to raise farming efficiency and the level of health management has become an inevitable trend. Video analysis technology can automatically monitor and interpret animal behaviour, is an important means of improving the informatization of farm management, and is increasingly applied to precision dairy farming. Researchers at home and abroad have done a great deal of work in this area, but most of it analyses the cow as a whole. A dairy cow, however, is a large multi-jointed animal whose head, neck, forelimbs, hind limbs and tail are individual body parts separated by joints; analysing each body part yields more precise details of the cow's movement and is the basis of posture detection and of behaviour analysis and understanding. To decompose the head, neck, trunk, forelimbs, hind limbs and tail accurately, this thesis studies and proposes a limb decomposition method for dairy cows based on skeleton features. Kinect sensors were installed on a dairy farm to acquire depth images of the cows, and three problems were studied: cow target extraction from depth images, cow skeleton extraction, and skeleton-feature-based limb decomposition. The main work and conclusions are as follows (illustrative code sketches of the three steps are given after the abstract).

(1) A cow target extraction method combining depth thresholding, image morphology and median filtering is proposed. Cows are difficult to extract accurately on a farm because of the complex background and illumination; weighing the installation environment and the cost-effectiveness of the available equipment, Kinect was chosen to capture depth images of the cows. The depth data acquired by the Kinect are converted to text data, the cow target is extracted by depth-threshold segmentation and morphological transformation of the depth image, and median filtering is applied to denoise the result, so that the target cow is effectively extracted from the complex background. Experiments show an overlap of 95.62% between the extracted cow target and a manually extracted reference.

(2) The cow skeleton is extracted using the skeleton-point decision criterion defined by Choi under the skeleton width constraint, and the skeleton is then pruned. After analysing the main skeleton extraction approaches and weighing connectivity, single-pixel width and efficiency, the cow skeleton is extracted with Choi's criterion and pruned with a discrete curve evolution model; the simplified skeleton still reflects the complete contour features of the cow.

(3) A limb decomposition method based on skeleton features is proposed. The method extracts the skeleton bifurcation points, which carry important position information, generates limb-splitting lines from these points under preset constraints, and optimizes the generated split lines using shape visual saliency and a split-line priority criterion, thereby decomposing the cow into its limbs. Experiments show that with the saliency threshold set to 2.5, the average accuracy of limb decomposition is 95.09%, and the accuracy for the tail, the part hardest to segment, reaches 95.51%; for cows with the head raised, walking normally, with the head slightly lowered and with the head lowered, the average decomposition accuracies are 95.18%, 95.00%, 94.85% and 96.23%, respectively, so the method achieves high-accuracy decomposition of cows in different postures.
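The thesis does not include source code. The following minimal Python sketch, assuming a 16-bit Kinect depth frame in millimetres and using OpenCV/NumPy, illustrates how depth-threshold segmentation, morphological open/close and median filtering can be combined as in contribution (1); the function name extract_cow_mask, the depth window and the kernel sizes are illustrative assumptions, not parameters from the thesis.

```python
# Minimal sketch of the target-extraction step: depth window -> morphology
# -> median filter -> keep the largest connected component.
# All numeric parameters here are assumptions for illustration only.
import numpy as np
import cv2

def extract_cow_mask(depth_mm: np.ndarray,
                     near_mm: int = 1500, far_mm: int = 3500) -> np.ndarray:
    """Return a binary cow mask from a 16-bit depth image in millimetres."""
    # 1. Depth-threshold segmentation: keep pixels inside an assumed
    #    distance window between the camera and the background.
    mask = ((depth_mm >= near_mm) & (depth_mm <= far_mm)).astype(np.uint8) * 255

    # 2. Morphological opening and closing to remove speckle and fill holes.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # 3. Median filtering to suppress the salt-and-pepper noise typical of
    #    Kinect depth data.
    mask = cv2.medianBlur(mask, 5)

    # 4. Keep only the largest connected component, assumed to be the cow.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    if n > 1:
        largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
        mask = np.where(labels == largest, 255, 0).astype(np.uint8)
    return mask
```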
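Contribution (2) relies on Choi's skeleton-point criterion under the skeleton width constraint and on discrete curve evolution (DCE) pruning; neither is reproduced here. The sketch below substitutes an off-the-shelf thinning skeleton (scikit-image) and a naive endpoint-peeling prune, so it shows only the overall flow rather than the thesis algorithm; min_branch_len is a hypothetical parameter, and the peeling also shortens long branches by the same amount, which DCE avoids.

```python
# Simplified stand-in for skeleton extraction and pruning: thinning-based
# skeletonization plus iterative endpoint peeling (NOT Choi's criterion or DCE).
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

# 3x3 kernel that counts a pixel's 8-connected neighbours.
_NEIGH = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]], dtype=np.uint8)

def skeleton_and_prune(mask: np.ndarray, min_branch_len: int = 20) -> np.ndarray:
    """Skeletonize a binary cow mask, then peel endpoints for a fixed number
    of iterations, which removes spurs shorter than min_branch_len."""
    skel = skeletonize(mask > 0)
    for _ in range(min_branch_len):
        neigh = ndimage.convolve(skel.astype(np.uint8), _NEIGH,
                                 mode="constant", cval=0)
        endpoints = skel & (neigh == 1)   # endpoints have exactly one neighbour
        if not endpoints.any():
            break
        skel = skel & ~endpoints          # peel one endpoint layer per pass
    return skel
```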
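For contribution (3), the sketch below only locates skeleton bifurcation points (skeleton pixels with three or more skeleton neighbours), the anchors from which the thesis generates its limb-splitting lines; the constraint-based line generation and the saliency/priority optimisation of the split lines are not reproduced, and the helper name bifurcation_points is hypothetical.

```python
# Hypothetical helper: find skeleton bifurcation (branch) points, which serve
# as anchors for the limb-splitting lines in the thesis method.
import numpy as np
from scipy import ndimage

_NEIGH = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]], dtype=np.uint8)

def bifurcation_points(skel: np.ndarray) -> np.ndarray:
    """Return (row, col) coordinates of skeleton pixels with >= 3 neighbours."""
    neigh = ndimage.convolve(skel.astype(np.uint8), _NEIGH,
                             mode="constant", cval=0)
    return np.argwhere(skel & (neigh >= 3))
```

In a full pipeline these points would be filtered under the preset constraints and combined with the shape visual saliency and split-line priority criteria described above to produce the final limb-splitting lines.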
【Degree-granting institution】: Northwest A&F University
【Degree level】: Master's
【Year of award】: 2017
【CLC number】: S823; TP391.41