

Research on Video Action Recognition

Posted: 2018-03-08 05:12

Topic: action recognition; Entry point: video analysis; Source: Jiangxi University of Science and Technology, master's thesis, 2017; Document type: degree thesis


【Abstract】: Human action recognition in video is a very active research area. With the rapid development of cameras, mobile phones, and other electronic products, applications built on recognizing human actions in video face increasingly demanding requirements. To address the problems of localizing human actions in video, effectively fusing the multiple features extracted from video, and exploiting action label information to improve classification, this thesis proposes a human action recognition method based on manifold metric learning. First, the area-change function of the human body region is obtained by analyzing the degree of limb extension within the detected body region. Because this function accumulates noise as it evolves over time, it is smoothed with a robust locally weighted smoothing method so that it reflects the essential fluctuation pattern, and the local minima of the smoothed area function are taken as cut points to segment the video into individual action clips, making the subsequent recognition targets concrete. Second, temporal global features, spatial features, inter-frame optical-flow features, and intra-frame local curl and divergence features of the human body region are extracted from each action clip and fused into a 7×7 covariance-matrix descriptor, so that actions are described on a Riemannian manifold. Finally, in the training stage a manifold metric learning method uses the label information of the training samples to find, in a supervised way, a more effective metric on the manifold, increasing intra-class compactness and inter-class separation and thereby improving action classification. In the experiments, segmentation statistics on the Weizmann public video dataset show that the proposed video segmentation method segments actions well and serves as effective pre-processing before recognition; comparing recognition results on Weizmann before and after manifold metric learning shows that metric learning improves accuracy by 2.8%; and the average recognition rates on the Weizmann and KTH public datasets are 95.6% and 92.3% respectively, which compares favorably with existing methods. Repeated experiments show that the action segmentation in the pre-processing stage works well, that the covariance-matrix descriptor fuses multiple features effectively when representing actions, and that manifold metric learning clearly improves recognition accuracy.
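The pre-processing step described in the abstract is essentially curve smoothing followed by minima detection. The following minimal Python sketch (not the thesis code; the `areas` input and the LOWESS/`argrelmin` parameters are assumptions made for illustration) shows how a per-frame body-area function could be smoothed with robust locally weighted regression and cut at its local minima:

```python
# Minimal sketch of area-function smoothing and action segmentation.
# Assumes `areas` (body-region area per frame) comes from an earlier
# silhouette-extraction step that is not shown here.
import numpy as np
from scipy.signal import argrelmin
from statsmodels.nonparametric.smoothers_lowess import lowess

def segment_actions(areas, frac=0.1, min_order=5):
    """Split a video into action clips from its body-area function.

    areas     : 1-D array, body-region area (pixels) for each frame
    frac      : fraction of frames used in each local regression window
    min_order : a frame counts as a cut point only if it is the minimum
                of `min_order` frames on each side
    Returns a list of (start_frame, end_frame) index pairs.
    """
    frames = np.arange(len(areas), dtype=float)
    # Robust locally weighted smoothing removes frame-to-frame noise while
    # keeping the slow "limb extension" fluctuation of the area function.
    smoothed = lowess(areas, frames, frac=frac, it=3, return_sorted=False)
    # Local minima of the smoothed curve are taken as action boundaries.
    cuts = argrelmin(smoothed, order=min_order)[0]
    bounds = np.concatenate(([0], cuts, [len(areas) - 1]))
    return [(int(bounds[i]), int(bounds[i + 1])) for i in range(len(bounds) - 1)]

# Toy usage with a synthetic area signal (several "actions" separated by dips):
if __name__ == "__main__":
    t = np.linspace(0, 4 * np.pi, 200)
    areas = 1000 + 200 * np.abs(np.sin(t)) + np.random.normal(0, 10, t.size)
    print(segment_actions(areas))
```

The `frac` and `min_order` values control how strongly noise is suppressed and how close two cut points may be; the thesis does not report its exact settings, so these are placeholders.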
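The 7×7 covariance descriptor and the manifold view can likewise be sketched briefly. The snippet below is a hedged illustration rather than the thesis implementation: the choice of the seven per-sample features and the use of the log-Euclidean distance as the manifold metric are assumptions; the thesis itself combines temporal, spatial, optical-flow, curl, and divergence features and does not specify its exact distance computation here.

```python
# Minimal sketch: fuse per-sample 7-D feature vectors of an action clip into a
# 7x7 covariance descriptor and compare descriptors on the manifold of
# symmetric positive-definite (SPD) matrices.
import numpy as np

def covariance_descriptor(features, eps=1e-6):
    """features: (n_samples, 7) array of per-pixel / per-frame feature vectors
    (which seven features to use is an assumption in this sketch).
    Returns a 7x7 SPD covariance matrix."""
    cov = np.cov(features, rowvar=False)
    # A small ridge keeps the matrix positive-definite for degenerate features.
    return cov + eps * np.eye(cov.shape[0])

def spd_log(c):
    """Matrix logarithm of an SPD matrix via its eigen-decomposition."""
    w, v = np.linalg.eigh(c)
    return (v * np.log(w)) @ v.T

def log_euclidean_distance(c1, c2):
    """||logm(C1) - logm(C2)||_F, a common computable stand-in for the
    Riemannian distance between SPD covariance descriptors."""
    return np.linalg.norm(spd_log(c1) - spd_log(c2), ord="fro")

# Toy usage: compare descriptors of two clips with random 7-D features.
rng = np.random.default_rng(0)
clip_a = rng.normal(size=(500, 7))
clip_b = 1.5 * rng.normal(size=(500, 7))
d = log_euclidean_distance(covariance_descriptor(clip_a),
                           covariance_descriptor(clip_b))
print(f"log-Euclidean distance between clips: {d:.3f}")
```

The supervised metric-learning stage described in the abstract could then operate on the log-mapped (vectorized) descriptors, learning a transform that pulls same-label actions together and pushes different labels apart; the specific learning algorithm used in the thesis is not reproduced here.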
【Degree-granting institution】: Jiangxi University of Science and Technology
【Degree level】: Master's
【Year degree awarded】: 2017
【CLC number】: TP391.41

