Kinect-Based Indoor Abnormal Behavior Detection
[Abstract]: In recent decades, continuous advances in science and technology have made video surveillance, especially high-definition systems, widespread, and computer vision processing has improved alongside it. Applying computer vision to high-definition video in the security-monitoring domain can improve the safety of public places: the goal is to detect abnormal behavior and alert people when it occurs. Computer vision is already widely used in video surveillance, but it requires efficient algorithms to meet real-time constraints. Researchers in this field use computers to recognize and understand human behavior, proceeding from foreground target detection through target tracking and localization to behavior understanding. Illumination changes, shadows, occlusion, and noise make behavior understanding from conventional video difficult. The advent of the Kinect brought depth imagery (RGB-D) into focus: the Kinect sensor suffers little external interference, can recognize a human target even in a dark environment, and provides skeleton features with spatial information. These properties make it well suited to human behavior recognition and motivate building abnormal behavior detection on the Kinect platform. This thesis uses a Kinect device to detect abnormal behavior indoors, with RGB-D as the acquired data. The abnormal behaviors studied are those in indoor scenes that violate people's expectations, typically including falling, fighting, and chasing, and the system raises an alarm when such behavior is detected.
This thesis first describes the algorithms and features used in the three stages of human abnormal behavior detection, analyzes their advantages and disadvantages, and reviews the current state of research along with its open problems and difficulties. The feasibility of using image depth information and skeleton node information is analyzed. Second, the hardware and software architecture of the Kinect is introduced, together with how RGB-D information is obtained. Skeleton node information is then extracted from the collected data; joint-angle information serves as the feature representation, and behaviors are distinguished by these features. Next, mainstream human behavior recognition algorithms are surveyed; this thesis uses a dynamic time warping (DTW) algorithm to detect human behavior and improves the algorithm to raise its running efficiency. Finally, the research is summarized, and future work and development trends are discussed.
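The pipeline the abstract describes, joint-angle features from skeleton nodes compared by dynamic time warping, can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation: the joint coordinates would come from the Kinect skeleton stream, and the Sakoe-Chiba band used here is one common way to improve DTW's running efficiency, which may or may not match the improvement the thesis proposes.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (radians) at joint b formed by 3-D joint positions a, b, c."""
    v1 = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v2 = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def dtw_distance(seq1, seq2, window=10):
    """DTW distance between two feature sequences (frames x features).

    A Sakoe-Chiba band of half-width `window` restricts the warping path,
    cutting the cost from O(n*m) full-matrix fills to roughly O(n*window).
    """
    n, m = len(seq1), len(seq2)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        lo, hi = max(1, i - window), min(m, i + window)
        for j in range(lo, hi + 1):
            cost = np.linalg.norm(seq1[i - 1] - seq2[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

A query sequence of per-frame joint-angle vectors would be classified by comparing its DTW distance to labeled template sequences (e.g. "falling", "walking") and taking the nearest template.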
【Degree-granting institution】: Jilin University
【Degree level】: Master's
【Year conferred】: 2017
【CLC number】: TP391.41