Research on Object Tracking Combining Correlation Filtering and Convolutional Neural Networks
Topics: feature extraction + convolutional neural network; Source: Northwest A&F University, 2017 Master's thesis
【Abstract】: Object tracking algorithms are an important technique in computer vision for processing video image information. During tracking, an algorithm must cope with challenges such as fast target motion, cluttered backgrounds, occlusion, and illumination changes, which places high demands on its robustness. Recently, combining deep learning with object tracking has become a hot topic in the tracking community. This thesis takes the combination of convolutional neural networks and correlation filtering as its subject. Through a layer-by-layer study of convolutional neural networks, the characteristics of the features produced at each layer are analysed in depth. To meet the real-time requirement of tracking, the original VGG-Net architecture is adjusted, and the modified network is used to extract features of the image target. In addition, scale adaptation and target-model updating are improved. The main contributions are as follows. (1) Domain-adaptive convolutional feature extraction. After analysing the features extracted by the convolutional network, deep features are combined with the kernelized correlation filter tracker. The restructured VGG-Net is used to extract features from target samples; features from earlier convolutional layers retain more spatial detail, while features from later layers carry semantic information about the target. The algorithm trains a filter on the features of three layers and fuses the three filters' tracking results in a coarse-to-fine manner to locate the target precisely. (2) Improved adaptive model-update strategy. In most trackers, the filter model is updated after the target is located in each frame; the update requires extracting features from the target sample again, which slows the tracker down. Moreover, because the target can be occluded by similar objects or by the background, updating the filter model when the tracking result has low accuracy contaminates the model, causes drift, and ultimately leads to tracking failure. To reduce computation and improve accuracy, the proposed algorithm uses two update criteria in the model-update module, the maximum response value and the average relative peak value, and updates the model only when the tracking result satisfies both conditions. (3) Target scale adaptation. During tracking, relative motion between the target and the camera changes the target's scale in the image; if the algorithm cannot adapt to scale changes, the tracking result drifts during detection. This thesis adds a separate scale filter that estimates the target's scale change in real time and adjusts the sampling-window size accordingly, so that tracking accuracy is not degraded by scale variation.
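A minimal sketch of the conditional update in contribution (2): the thesis names "maximum response value" and "average relative peak value" as its two criteria; the peak-to-energy formula, threshold fractions, and learning rate below are illustrative assumptions, not the thesis's exact design.

```python
import numpy as np

def avg_relative_peak(resp):
    """Peak sharpness of a response map relative to its overall energy."""
    fmax, fmin = resp.max(), resp.min()
    return (fmax - fmin) ** 2 / np.mean((resp - fmin) ** 2)

class ConditionalUpdater:
    """Update the filter model only when the current response is confident
    relative to the historical averages of both criteria."""

    def __init__(self, lr=0.02, beta_max=0.6, beta_peak=0.45):
        self.lr = lr                      # interpolation rate (assumed)
        self.beta_max, self.beta_peak = beta_max, beta_peak
        self.sum_max, self.sum_peak, self.n = 0.0, 0.0, 0

    def should_update(self, resp):
        fmax, peak = resp.max(), avg_relative_peak(resp)
        ok = True
        if self.n > 0:                    # compare against historical means
            ok = (fmax >= self.beta_max * self.sum_max / self.n and
                  peak >= self.beta_peak * self.sum_peak / self.n)
        self.sum_max += fmax
        self.sum_peak += peak
        self.n += 1
        return ok

    def update(self, model, new_model, resp):
        if self.should_update(resp):      # confident result: blend models
            return (1 - self.lr) * model + self.lr * new_model
        return model                      # low confidence: keep old model
```

Skipping the blend on low-confidence frames saves the repeated feature extraction the abstract mentions and prevents occluded samples from contaminating the model.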
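Contribution (3)'s separate scale filter can be outlined as a scale pyramid: sample the target at several candidate scales, score each sample with the scale filter, and resize the sampling window to the best one. The pyramid parameters and helper functions below are illustrative (in the style of discriminative scale-space tracking), not the thesis's exact design.

```python
import numpy as np

def scale_factors(num=17, step=1.02):
    """Symmetric scale pyramid step**n for n = -(num-1)/2 .. (num-1)/2."""
    exps = np.arange(num) - (num - 1) / 2.0
    return step ** exps

def estimate_scale(scores, factors):
    """Pick the candidate scale whose filter score is highest."""
    return factors[int(np.argmax(scores))]

def rescale_box(box, factor):
    """Rescale an (x, y, w, h) sampling window about its centre."""
    x, y, w, h = box
    nw, nh = w * factor, h * factor
    return (x + (w - nw) / 2.0, y + (h - nh) / 2.0, nw, nh)
```

Because the scale search is a separate one-dimensional problem, it adds little cost on top of the translation filter while keeping the sampling window matched to the target's apparent size.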
【Degree-granting institution】: Northwest A&F University
【Degree level】: Master's
【Year conferred】: 2017
【Classification number】: TP391.41; TP183