

Research on a Multi-Pedestrian Tracking Algorithm Based on Feature Sparse Representation

Published: 2018-02-06 07:33

Keywords: pedestrian detection; sparse representation; data association; multi-pedestrian tracking. Source: Southwest University, 2017 master's thesis. Document type: degree thesis


【Abstract】: With the rapid development of computer and information technology in recent years, image and video data have grown dramatically, driving progress in computer vision, artificial intelligence, and related fields. Video object tracking, a major research topic in computer vision, has promising applications in intelligent video surveillance, human-computer interaction, and intelligent security. Its main task is to locate an object of interest in a video continuously and accurately. Although research on video object tracking has achieved considerable results, the complexity and variability of real-world environments, unavoidable interaction and occlusion between targets, and scale changes of the targets themselves keep the technology some distance from practical application, so it retains great research value. As the principal actors in society, humans are an important subject of study in video tracking, and a real scene usually contains more than one pedestrian; this thesis therefore studies multi-pedestrian tracking.

Because object descriptors based on sparse coding are robust to partial occlusion, this thesis introduces a sparse representation model to describe targets and proposes a multi-pedestrian tracking algorithm based on feature sparse representation. To better distinguish each target from the background, a classifier based on sparse representation is constructed for every target. Each pedestrian in the video is then tracked with the help of its classifier using a tracking method based on Bayesian inference, and the optimal estimate of the target state is output as the tracking result. Finally, an integration framework combines the multiple single-target trackers to achieve multi-pedestrian tracking. To describe a target, an overcomplete dictionary is constructed, the target's joint features (grayscale, HOG, and LBP features) are extracted and sparsely decomposed over that dictionary, and the sparse coefficients of the joint features serve as the target descriptor.

Tracking a pedestrian from appearance to disappearance in the scene requires locating the target continuously. First, an appearance model, comprising the overcomplete dictionary and the classifier, is built for each target. When a new image frame arrives, the method based on Bayesian inference estimates the target's optimal state. Each target is tracked by its own independent tracker; since this thesis addresses multi-pedestrian tracking, it designs an integration framework that combines the multiple single-target trackers. Within this framework, the start and end of each pedestrian's trajectory are determined for the individual trackers, and pedestrians are associated across frames. Trajectory start and end points are judged mainly from the per-frame pedestrian detection results; cross-frame association uses the classifiers to solve the data-association problem between detections and the multiple tracked targets. To verify the effectiveness of the proposed algorithm, experiments were conducted on three standard datasets: PETS09 S2L1, Town Center, and Parking Lot. The results show that the proposed multi-pedestrian tracking algorithm based on feature sparse representation achieves good tracking performance.
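The descriptor-construction step described in the abstract, sparsely decomposing a joint feature vector over an overcomplete dictionary, can be sketched with a generic l1 solver. This is a minimal illustration only: the dimensions, the random dictionary, and the ISTA solver are assumptions, not the solver or features the thesis actually uses.

```python
import numpy as np

def sparse_code(D, x, lam=0.1, n_iter=200):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 via ISTA.

    D: (d, k) overcomplete dictionary (k > d), columns are atoms.
    x: (d,) joint feature vector (e.g. concatenated gray/HOG/LBP features).
    Returns the sparse coefficient vector a (k,), used as the descriptor.
    """
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the smooth term
        a = a - grad / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft-threshold
    return a

# Toy example: a 2x-overcomplete random dictionary and a feature vector
# generated mostly from atom 5, so its coefficient should dominate.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
x = 2.0 * D[:, 5] + 0.01 * rng.standard_normal(64)
a = sparse_code(D, x)
```

The resulting coefficient vector is sparse, which is what makes the descriptor tolerant of partial occlusion: a corrupted region of the target perturbs only a few coefficients.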
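The Bayesian-inference tracking step, estimating the optimal target state frame by frame, is commonly realized with a particle filter. The sketch below is a stand-in under stated assumptions: the Gaussian random-walk motion model, the particle count, and the toy `conf` likelihood (playing the role of the sparse-representation classifier response) are all illustrative, not the thesis's actual design.

```python
import numpy as np

def bayes_step(particles, weights, confidence, motion_std=2.0, rng=None):
    """One Bayesian filtering step for a single pedestrian tracker.

    particles: (n, 2) candidate target centres (x, y); weights: (n,).
    confidence: maps an (n, 2) array of states to observation likelihoods,
    standing in here for the per-target classifier response.
    Returns resampled particles, uniform weights, and the highest-weight
    particle as the state estimate output by the tracker.
    """
    rng = rng or np.random.default_rng()
    # Predict: propagate particles with a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: reweight each particle by the observation likelihood.
    weights = weights * confidence(particles)
    weights /= weights.sum()
    estimate = particles[np.argmax(weights)]
    # Resample to avoid weight degeneracy.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles)), estimate

# Toy run: the "classifier" prefers states near the true centre (50, 50).
rng = np.random.default_rng(1)
true = np.array([50.0, 50.0])
conf = lambda p: np.exp(-np.sum((p - true) ** 2, axis=1) / 50.0)
parts = rng.normal(true, 5.0, (500, 2))
w = np.full(500, 1.0 / 500)
for _ in range(5):
    parts, w, est = bayes_step(parts, w, conf, rng=rng)
```

One such filter per pedestrian gives the independent single-target trackers that the integration framework then combines.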
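The data-association step, matching per-frame detections against the multiple tracked targets using classifier scores, can be illustrated with a simple greedy matcher. The affinity matrix, the threshold, and the greedy strategy are assumptions for illustration; the abstract does not specify which association algorithm the thesis uses.

```python
import numpy as np

def associate(score, threshold=0.3):
    """Greedily match detections to trackers by descending affinity.

    score: (n_trackers, n_detections) matrix, e.g. each tracker's
    classifier response evaluated on each detection. Pairs scoring below
    `threshold` stay unmatched: unmatched detections can start new
    trackers (trajectory start points), and trackers left unmatched for
    several frames can be terminated (trajectory end points).
    Returns (matches, unmatched_trackers, unmatched_detections).
    """
    matches = []
    used_t, used_d = set(), set()
    # Visit tracker/detection pairs from highest to lowest affinity.
    order = np.unravel_index(np.argsort(score, axis=None)[::-1], score.shape)
    for t, d in zip(*order):
        if score[t, d] < threshold:
            break                       # everything further is too weak
        if t in used_t or d in used_d:
            continue                    # each side matched at most once
        matches.append((int(t), int(d)))
        used_t.add(t)
        used_d.add(d)
    unmatched_t = [t for t in range(score.shape[0]) if t not in used_t]
    unmatched_d = [d for d in range(score.shape[1]) if d not in used_d]
    return matches, unmatched_t, unmatched_d

# Two trackers, three detections: tracker 0 pairs with detection 1,
# tracker 1 with detection 0; detection 2 would spawn a new tracker.
S = np.array([[0.1, 0.9, 0.2],
              [0.8, 0.2, 0.1]])
m, ut, ud = associate(S)
```

An optimal alternative to this greedy pass is the Hungarian algorithm on the negated score matrix, which minimizes total assignment cost rather than picking pairs one at a time.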
【Degree-granting institution】: Southwest University
【Degree level】: Master's
【Year conferred】: 2017
【CLC number】: TP391.41

