Research on Human Behavior Recognition Algorithms Based on a Large-Scale RGB-D Dataset
[Abstract]: With the rapid development of information technology in the 21st century and the growing intelligence of everyday life, computer vision increasingly affects every aspect of human activity. Human behavior recognition and analysis, owing to its broad application prospects and practical value, has become a hot topic in computer vision in recent years. Human behavior recognition analyzes an original video image sequence, extracts relevant behavioral feature information, and finally interprets that information so as to recognize and learn human behavior. Although rapid advances in computing and image-processing technology have greatly promoted research on behavior recognition, and the spread of big-data techniques has made algorithm performance increasingly dependent on the dataset, open problems remain: how to select effective behavior features, and issues in current datasets such as occlusion, overly simple backgrounds, and the shortage of large sample collections. Behavior recognition in complex natural scenes driven by large amounts of data therefore remains a very challenging research field. The color-depth (RGB-Depth, RGB-D) sensor provides color and depth images simultaneously, and 3D depth information can be acquired directly without additional computation, which greatly facilitates the application of depth information to human behavior recognition. Recognition and analysis of human behavior rest on behavior datasets. A variety of datasets have been proposed in the course of behavior-recognition research, but the common existing RGB-D behavior datasets are limited in the number of behavior categories and samples and offer only simple background environments, making them difficult to use for behavior recognition in complex natural scenes based on large amounts of data. This thesis therefore establishes a comprehensive large-scale RGB-D behavior dataset to promote research on human behavior recognition in complex natural scenes, and applies three feature-extraction algorithms on that dataset.
The research contents of this thesis are as follows. First, the research background, significance, and purpose of human behavior recognition are analyzed; the state of the art is surveyed from three aspects (datasets, feature extraction, and classifiers); the problems facing RGB-D-based behavior recognition are described; and the main contents and chapter organization of the thesis are introduced. Second, the advantages of RGB-D sensors and the importance of depth information for human behavior recognition are described, several existing RGB-D datasets are reviewed in detail, and their strengths and weaknesses are compared. Third, five typical RGB-D datasets are selected; their data are preprocessed, analyzed, and finally merged into a comprehensive large-scale RGB-D dataset; the behavior categories are recalibrated and the data storage format is unified. This part focuses on the construction of the large-scale dataset and presents its data statistics, advantages, and significance. Fourth, three kinds of features are extracted from the large-scale RGB-D dataset: depth motion maps (DMM), the depth cuboid similarity feature (DCSF), and curvature scale space (CSS) features.
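The thesis does not publish its merging pipeline; the following is a minimal, hypothetical sketch of the recalibration step described above — mapping each sub-dataset's category names into one unified label space. The dataset names and category pairs in `LABEL_MAP` are illustrative assumptions, not the thesis's actual mapping.

```python
# Hypothetical category recalibration when merging several RGB-D datasets.
# Dataset names and label pairs below are illustrative only.
LABEL_MAP = {
    ("MSRAction3D", "high_wave"): "wave",
    ("UTKinect", "waveHands"): "wave",
    ("MSRAction3D", "forward_kick"): "kick",
    ("UTKinect", "kick"): "kick",
}

def unify(samples):
    """samples: iterable of (dataset, label, path) triples.

    Returns records carrying a recalibrated global label; samples whose
    (dataset, label) pair has no entry in LABEL_MAP are skipped.
    """
    merged = []
    for dataset, label, path in samples:
        new_label = LABEL_MAP.get((dataset, label))
        if new_label is not None:
            merged.append({"dataset": dataset, "label": new_label, "path": path})
    return merged
```

In practice such a map would also drive the unification of storage formats (one record per sample, one canonical label vocabulary) that the thesis describes.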
The DMM feature accumulates the absolute differences (motion energy) between the projections of consecutive frames over the entire depth video sequence; DCSF describes the similarity relationships among scale-adaptive 3D depth cuboids constructed around spatio-temporal interest points; CSS represents invariant features of the human contour curve at different scale levels. The three feature-extraction algorithms are tested on the five sub-datasets and on the comprehensive large-scale dataset, and a collaborative representation classifier (CRC) is used to recognize the behaviors. The applicability and validity of the established large-scale RGB-D dataset are verified through comparison and analysis of the experimental results. Finally, the work of this thesis is summarized and future research directions are discussed.
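The DMM accumulation step described above can be sketched in a few lines. This is a simplified illustration, not the thesis's implementation: the front view uses the depth image directly, while the side and top views are approximated here as binary occupancy maps over quantized depth bins (the bin count and depth range are assumed parameters).

```python
import numpy as np

def project(depth, view, n_bins=256, z_max=4000.0):
    """Project one depth frame onto a 2D plane (simplified).

    front: the depth image itself (XY plane).
    side : binary occupancy on the YZ plane (rows x depth bins).
    top  : binary occupancy on the XZ plane (depth bins x cols).
    """
    h, w = depth.shape
    z = np.clip(depth / z_max * (n_bins - 1), 0, n_bins - 1).astype(int)
    valid = depth > 0
    if view == "front":
        return depth.astype(np.float64)
    if view == "side":
        plane = np.zeros((h, n_bins))
        plane[np.nonzero(valid)[0], z[valid]] = 1.0
        return plane
    if view == "top":
        plane = np.zeros((n_bins, w))
        plane[z[valid], np.nonzero(valid)[1]] = 1.0
        return plane
    raise ValueError(view)

def depth_motion_map(frames, view="front"):
    """Accumulate absolute inter-frame projection differences (motion energy)."""
    projs = [project(f, view) for f in frames]
    dmm = np.zeros_like(projs[0])
    for prev, curr in zip(projs, projs[1:]):
        dmm += np.abs(curr - prev)  # motion energy between consecutive frames
    return dmm
```

Regions where the body moves accumulate large values, so the resulting map summarizes where motion occurred over the whole sequence.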
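For the classification stage, a minimal sketch of a collaborative representation classifier (CRC-RLS style) is given below, assuming column-stacked training features and the standard regularized least-squares coding followed by class-wise residuals; the regularization weight `lam` is an assumed parameter.

```python
import numpy as np

def crc_classify(X_train, labels, y, lam=0.001):
    """Collaborative representation classifier (sketch).

    X_train: (d, n) dictionary whose columns are training feature vectors.
    labels : length-n array of class labels, one per column.
    y      : (d,) query feature vector.
    Returns the label with the smallest regularized class residual.
    """
    # Closed-form ridge coding: alpha = (X^T X + lam*I)^-1 X^T y
    n = X_train.shape[1]
    alpha = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n),
                            X_train.T @ y)
    best, best_r = None, np.inf
    for c in np.unique(labels):
        idx = labels == c
        resid = np.linalg.norm(y - X_train[:, idx] @ alpha[idx])
        r = resid / (np.linalg.norm(alpha[idx]) + 1e-12)  # class-wise residual
        if r < best_r:
            best, best_r = c, r
    return best
```

The design point is that, unlike sparse-representation classification, the code `alpha` has a closed form, so classification reduces to one linear solve plus per-class residual comparisons.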
【Degree-granting institution】: Shandong University
【Degree level】: Master's
【Year conferred】: 2017
【Classification number】: TP391.41