Research on Multi-Source Target Fusion Tracking Based on Joint Histogram Representation
Topic: histogram representation + mean shift; Source: Master's thesis, Guangxi Normal University, 2017
[Abstract]: Target tracking underlies many video scene analysis and understanding tasks, such as visual surveillance, human-computer interaction, and vehicle navigation. A large number of tracking methods have been reported, and they can be roughly divided into two categories: single-spectrum tracking and multi-spectrum tracking. Compared with single-spectrum target tracking systems, multi-spectrum systems have clear advantages in survivability, spatio-temporal coverage, and reliability, and have therefore been widely studied; the most representative case is the fusion tracking of infrared and visible light. An infrared sensor forms an image by detecting differences in the thermal energy radiated by targets, so it outperforms visible-light imaging under poor illumination, but it cannot capture the color and texture features of the target. A visible-light sensor cannot sense temperature, but it usually outperforms an infrared sensor when multiple thermal targets intersect, especially when the targets differ markedly in color and texture. Combining their data therefore yields better tracking performance than either sensor alone. Starting from the two aspects of target representation and target search, this paper proposes the following two infrared and visible-light fusion tracking algorithms based on joint histogram representation.
1. A kernel tracking algorithm for infrared and visible targets based on joint histogram representation. First, with the histogram as the feature representation model, the color histograms of the infrared image patch and of the visible image patch are computed for a given candidate state.
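The per-modality histogram representation described above can be sketched as follows. This is a minimal illustration assuming single-channel patches and a fixed number of bins; the function name and bin count are illustrative, and the thesis itself uses full color histograms.

```python
import numpy as np

def patch_histogram(patch, n_bins=16):
    """Normalized intensity histogram of an image patch (values in 0..255).

    The thesis computes color histograms of the infrared and visible patches;
    this sketch uses a single channel for brevity.
    """
    hist, _ = np.histogram(patch.ravel(), bins=n_bins, range=(0, 256))
    hist = hist.astype(float)
    return hist / hist.sum()  # normalize so the bins sum to 1

# Example: histograms of a (random stand-in) infrared patch and visible patch
rng = np.random.default_rng(0)
ir_patch = rng.integers(0, 256, size=(32, 32))
vis_patch = rng.integers(0, 256, size=(32, 32))
p_ir = patch_histogram(ir_patch)
p_vis = patch_histogram(vis_patch)
```

Each candidate state thus yields one histogram per modality, which the tracker then compares against the corresponding target template.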
Then the Bhattacharyya coefficient is used to measure the similarity between the color histogram of the infrared patch and its target template, and likewise between the color histogram of the visible patch and its target template. The two similarities are combined with weights to form the objective function. Next, a multivariate Taylor expansion of the objective function yields a linear approximation, and maximizing this approximation gives a state-transition relation from the current candidate state to a new candidate state. Finally, following this relation, a mean-shift procedure is applied recursively to obtain the target's final state in the current frame.
2. A particle-filter tracking algorithm for infrared and visible targets based on joint histogram representation. First, taking the tracking result of the previous frame as the initial state, a six-parameter affine transformation model is used to generate a set of Gaussian random sampling particles. Then, for each sampled particle, the color histograms of the corresponding infrared and visible image patches are computed, together with their similarities to the respective target templates. Next, the weighted combination of the two similarities serves as the observation likelihood of the particle filter, and running the particle filter yields each particle's posterior probability; repeating these steps for the remaining particles gives the posterior probabilities of all particles. Finally, the expectation of the particles weighted by their posterior probabilities is taken as the target's final state in the current frame.
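The weighted Bhattacharyya objective and the mean-shift update at the heart of the kernel tracker can be sketched as follows. This is a simplified, single-channel sketch under stated assumptions: `alpha` stands in for the fusion weight, the mean-shift step is shown for one modality (the fused case would combine weights from both), and all names are illustrative rather than taken from the thesis.

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalized histograms."""
    return np.sum(np.sqrt(p * q))

def fused_similarity(p_ir, q_ir, p_vis, q_vis, alpha=0.5):
    """Weighted combination of the infrared and visible similarities:
    the objective that the mean-shift iterations maximize."""
    return alpha * bhattacharyya(p_ir, q_ir) + (1 - alpha) * bhattacharyya(p_vis, q_vis)

def mean_shift_step(pixels, positions, q, p, n_bins=16):
    """One mean-shift update: each pixel is weighted by sqrt(q_u / p_u) for
    the bin u it falls in, and the window centre moves to the weighted mean
    of the pixel positions."""
    bins = (pixels.astype(int) * n_bins) // 256          # bin index per pixel
    w = np.sqrt(q[bins] / np.maximum(p[bins], 1e-12))    # backprojection weights
    return (positions * w[:, None]).sum(axis=0) / w.sum()
```

Iterating `mean_shift_step` until the centre stops moving realizes the recursive search for the target's final state described above.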
This algorithm overcomes the tendency of the kernel-based fusion tracker to fall into local optima during iteration, and it handles affine motion changes of the target such as scaling, rotation, and deformation. Tests on multiple pairs of infrared and visible image sequences show that the kernel-based fusion tracking algorithm runs in real time, while the particle-filter-based fusion tracker copes well with affine motion changes of the target. Both fusion tracking methods perform well under occlusion, illumination change, and target intersection.
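The Gaussian sampling, likelihood weighting, and posterior-expectation steps of the particle-filter tracker can be sketched as follows. As a simplification, the six-parameter affine state is reduced to a 2-D translation, and the peaked `likelihood` used in the example is a hypothetical stand-in for the weighted histogram similarity; names and parameters are illustrative.

```python
import numpy as np

def particle_filter_step(prev_state, likelihood, n_particles=100, sigma=2.0, rng=None):
    """One frame of the fusion particle filter (sketch).

    prev_state: previous tracking result (here just [x, y]; the thesis uses
    a six-parameter affine state, reduced here for brevity).
    likelihood: callable mapping a candidate state to the weighted
    combination of infrared and visible histogram similarities.
    """
    rng = rng or np.random.default_rng()
    # 1. Gaussian random sampling of particles around the previous result
    particles = prev_state + rng.normal(0.0, sigma, size=(n_particles, len(prev_state)))
    # 2. Observation likelihood -> normalized posterior weights
    w = np.array([likelihood(s) for s in particles])
    w = w / w.sum()
    # 3. Posterior expectation as the final state for this frame
    return (particles * w[:, None]).sum(axis=0)

# Example with a hypothetical likelihood peaked at [5, 5]
rng = np.random.default_rng(1)
likelihood = lambda s: np.exp(-np.sum((s - np.array([5.0, 5.0])) ** 2) / 10.0)
est = particle_filter_step(np.array([4.0, 4.0]), likelihood, n_particles=500, rng=rng)
```

Because the estimate averages over many sampled states rather than following a single gradient-like path, this variant avoids the local optima that trap the kernel-based iterations.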
【Degree-granting institution】: Guangxi Normal University
【Degree level】: Master
【Year awarded】: 2017
【Classification number】: TP391.41