Research on Object Tracking Algorithms Based on Structured Support Vector Machines
Published: 2018-10-17 19:09
【Abstract】: With advances in science and technology, computer vision has entered everyday life alongside artificial intelligence. Object detection and tracking, a key and classically difficult problem in computer vision, has attracted researchers from many related fields in recent years, and different detection and tracking algorithms have been explored for different scenes. In both detection and tracking, the central questions are how to describe the target effectively and how to let a computer recognize it accurately; they differ in that detection emphasizes accuracy, while tracking emphasizes real-time performance. To address these two requirements, this thesis exploits the strong classification properties of support vector machines and studies the following detection and tracking systems. For the detection system, the training stage first computes HOG and LBP-HF features in each sliding window and concatenates them into a joint feature. A linear support vector machine (SVM) classifier is then trained, and the classifier is iteratively updated and refined by bootstrapping to obtain the optimal discriminative model. At detection time, the extracted joint features are fed into the trained classifier for discrimination, and overlapping detection windows are merged by non-maximum suppression (NMS) to produce the final detection result. Experiments show that the improved method achieves a high detection rate, low computational complexity, and strong robustness to pedestrian limb deflection. For the tracking system, a model-free tracking framework first represents the target appearance with the improved HOG-LBPHF features and incorporates structural information between targets to train the SVM. A passive-aggressive perceptron is then used to optimize the classification hyperplane. Finally, a minimum spanning tree model determines the target location in the next frame. Comparative experiments show that the algorithm achieves good tracking performance.
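The bootstrapping step described above retrains the classifier on its own false positives (hard negatives). A minimal sketch of that loop, assuming hypothetical `train` and `score` callables and scalar window features (not the thesis implementation):

```python
def hard_negative_mining(train, score, positives, negatives,
                         background_windows, rounds=3, thresh=0.0):
    """Bootstrap loop: train, scan background-only windows, and feed
    any window the model wrongly accepts (score > thresh) back into
    the negative pool before retraining."""
    model = train(positives, negatives)
    for _ in range(rounds):
        false_positives = [w for w in background_windows
                           if score(model, w) > thresh]
        if not false_positives:
            break  # the model no longer fires on background windows
        negatives = negatives + false_positives
        model = train(positives, negatives)
    return model
```

The loop terminates early once a full scan of the background set yields no false positives, which is the usual stopping criterion for this kind of classifier refinement.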
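The NMS fusion of overlapping detection windows can be sketched as standard greedy non-maximum suppression (a minimal illustration, not the thesis code; the corner-coordinate box format and the 0.5 overlap threshold are assumptions):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: keep windows in descending score order, dropping any
    window that overlaps an already-kept one above iou_thresh."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_thresh for j in keep):
            keep.append(i)
    return keep
```

For example, two sliding windows offset by one pixel over the same pedestrian overlap heavily and collapse to the higher-scoring one, while a distant window survives.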
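The passive-aggressive optimization of the classification hyperplane mentioned for the tracker can be illustrated with the standard PA-I update for a linear classifier (a textbook sketch under the usual hinge-loss formulation, not the thesis implementation; the aggressiveness bound `C` is an assumption):

```python
def pa_update(w, x, y, C=1.0):
    """One PA-I step: if the margin y*(w.x) falls below 1, move w just
    far enough toward satisfying the margin, with step size capped by C
    (passive when the margin holds, aggressive when it is violated)."""
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    loss = max(0.0, 1.0 - margin)
    if loss == 0.0:
        return list(w)  # passive: example already correctly classified
    tau = min(C, loss / sum(xi * xi for xi in x))
    return [wi + tau * y * xi for wi, xi in zip(w, x)]
```

Each tracked frame thus nudges the hyperplane only as much as the newest sample demands, which is why this family of updates suits online tracking.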
【Degree-granting institution】: Harbin University of Science and Technology
【Degree level】: Master's
【Year conferred】: 2016
【CLC classification】: TP391.41; TP18
Article No.: 2277675
Article link: http://sikaile.net/kejilunwen/zidonghuakongzhilunwen/2277675.html