Video-Based Multi-Target Moving Human Behavior Recognition
Topic: quaternions + action segmentation; Source: master's thesis, Donghua University, 2017
【Abstract】: Vision is an essential way for humans to observe and understand the world. The goal of video-based analysis of moving human behavior is to detect and track people in video, recognize their individual actions, and understand interactions among multiple people and between people and their surroundings, with little or no human intervention. Video-based human behavior analysis has significant application value in intelligent video surveillance, intelligent human-computer interaction, and artificial intelligence. This thesis uses the Microsoft Kinect for Windows 2.0 as the device for capturing human motion video and combines it with three-dimensional human rotation information. Starting from several key difficulties in current video-based human behavior analysis, it investigates human motion feature extraction, continuous action segmentation, and human behavior recognition, and obtains the following results:
(1) Segmentation and recognition of continuous human actions by defining an inter-frame distance on depth images. Most existing action recognition methods assume that the start and end frames of the action to be recognized are already given. To remove this assumption, the thesis defines an inter-frame distance for depth images and proposes a moving-human behavior segmentation algorithm based on the video inter-frame distance (DBKF), achieving simple and convenient segmentation of the continuous actions of a single moving target.
(2) Multi-target human behavior recognition based on joint quaternions. Combining quaternion theory with the three-dimensional human skeleton, the thesis proposes joint quaternions as the feature information for a representation model of moving human behavior. Multiple human targets are first separated into individuals, and a support vector machine (SVM) is then used as the classifier for the human actions in each video segment, achieving video-based multi-target human behavior recognition with few features and high accuracy.
(3) Abnormal behavior recognition based on a hidden Markov model (HMM). The thesis extends the three-dimensional human rotation information to abnormal behavior recognition, using joint angles as feature parameters together with an HMM to detect abnormal behavior of continuously moving people in video; a comparison with the classical blob features analyzes how different feature information affects the accuracy of abnormal behavior recognition.
Finally, aspects of the work that could be further optimized are discussed, and new research directions are derived from them.
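The inter-frame distance in (1) can be illustrated with a minimal Python sketch. The abstract does not spell out the exact distance used by the DBKF algorithm, so the pixel-wise mean absolute depth difference, the zero-depth validity mask, and the low-motion boundary rule below are assumptions made for illustration only.

import numpy as np

def frame_distance(depth_a, depth_b):
    # Assumed definition: mean absolute depth difference over pixels
    # where both depth frames report a valid (non-zero) reading.
    mask = (depth_a > 0) & (depth_b > 0)
    if not mask.any():
        return 0.0
    diff = np.abs(depth_a[mask].astype(np.int32) - depth_b[mask].astype(np.int32))
    return float(diff.mean())

def segment_boundaries(depth_frames, threshold):
    # Treat frames whose distance to the previous frame stays below the
    # threshold as pauses, i.e. candidate boundaries between actions.
    return [i for i in range(1, len(depth_frames))
            if frame_distance(depth_frames[i - 1], depth_frames[i]) < threshold]

In practice the boundary list would still need smoothing (for example, merging runs of consecutive low-motion frames) before the video is cut into action clips.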
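A sketch of how joint quaternions could feed an SVM classifier as in (2). The quaternion that rotates a fixed reference direction onto each bone, the per-clip mean pooling, and the RBF kernel are illustrative assumptions; the thesis's exact feature construction and its multi-target separation step are not reproduced here.

import numpy as np
from sklearn.svm import SVC

def bone_quaternion(parent_xyz, child_xyz, ref=np.array([0.0, 1.0, 0.0])):
    # Assumed joint-quaternion definition: the unit quaternion (w, x, y, z)
    # rotating the reference direction onto the parent->child bone direction.
    bone = np.asarray(child_xyz, dtype=float) - np.asarray(parent_xyz, dtype=float)
    bone /= (np.linalg.norm(bone) + 1e-8)
    axis = np.cross(ref, bone)
    sin_a = np.linalg.norm(axis)
    cos_a = float(np.clip(np.dot(ref, bone), -1.0, 1.0))
    if sin_a < 1e-8:                       # degenerate: bone (anti-)parallel to ref
        return np.array([1.0, 0.0, 0.0, 0.0])
    angle = np.arctan2(sin_a, cos_a)
    return np.concatenate(([np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis / sin_a))

def clip_feature(skeleton_frames, bones):
    # One feature vector per clip: per-bone quaternions concatenated for each
    # frame, then averaged over frames (a simplification of the thesis model).
    per_frame = [np.concatenate([bone_quaternion(f[p], f[c]) for p, c in bones])
                 for f in skeleton_frames]
    return np.mean(per_frame, axis=0)

# Hypothetical usage: `clips` holds skeleton sequences (dicts of joint -> (x, y, z)
# per frame), `labels` the action class per clip, `bones` the (parent, child) pairs.
# X = np.stack([clip_feature(clip, bones) for clip in clips])
# clf = SVC(kernel="rbf").fit(X, labels)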
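For the abnormal behavior recognition in (3), one common realization is to train an HMM on joint-angle sequences of normal behavior and flag sequences with low likelihood as abnormal. The sketch below follows that pattern with the third-party hmmlearn package; the Gaussian observation model, the number of hidden states, and the per-frame log-likelihood threshold are assumptions rather than the thesis's actual settings.

import numpy as np
from hmmlearn import hmm   # assumed dependency: pip install hmmlearn

def train_normal_model(normal_sequences, n_states=4):
    # Each sequence is a (T, D) array of joint angles, one row per frame.
    X = np.vstack(normal_sequences)
    lengths = [len(seq) for seq in normal_sequences]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
    model.fit(X, lengths)
    return model

def is_abnormal(model, sequence, threshold):
    # Average per-frame log-likelihood under the "normal" model; sequences
    # scoring below the threshold (chosen on validation data) are flagged.
    return model.score(sequence) / len(sequence) < threshold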
【Degree-granting institution】: Donghua University
【Degree level】: Master's
【Year degree conferred】: 2017
【CLC number】: TP391.41