Application of CNN-Based Hand Pose Estimation to Gesture Recognition
[Abstract]: Gesture recognition is an important research direction in human-computer interaction. As a human-computer interface, gesture is natural, intuitive, and close to human communication habits, so it has broad application prospects. When gesture recognition algorithms are applied to human-computer interaction, the user's gesture plane is usually required to be parallel to the camera's imaging plane, i.e., perpendicular to the horizontal plane. This paper presents a gesture recognition algorithm based on hand pose estimation: a convolutional neural network estimates the hand pose in a depth map to obtain the spatial coordinates of the hand's key points, which are then used for gesture recognition, so that atypical gestures can be recognized as their typical counterparts. The main work of this paper is as follows:
1. Depth information is acquired with a Kinect sensor, and the hand is tracked and segmented in complex scenes. Morphological processing and data normalization then yield a gesture depth map suitable as input to a convolutional neural network.
2. For the convolutional network used for pose estimation, accuracy is improved by adding an intermediate layer embedding a nonlinear hand model and by using multi-resolution gesture depth maps as the network input; detection speed is improved by reducing the number of hand joints that need to be estimated. Experimental results show that the proposed network model reduces the average error of hand pose estimation by 2.21 mm.
3. Finger curvature is represented by the ratio of the fingertip distance to the finger-root distance. The spatial coordinates of each joint are obtained from the pose estimate, the distances between joints are computed from them, and the resulting bending ratios are applied to the recognition of finger-guessing (rock-paper-scissors) gestures.
The average recognition rate of the proposed gesture recognition algorithm is 95.8%, and the recognition rate for atypical gestures is 94.6%.
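As a rough sketch of the preprocessing in point 1 and the multi-resolution input in point 2 — hypothetical, since the abstract does not give the thesis's exact normalization, cube size, output resolution, or pyramid scales — one common scheme centers the segmented hand's depth values, clips them to a fixed metric cube mapped to [-1, 1], and downsamples to coarser scales:

```python
import numpy as np

def normalize_hand_depth(depth_crop, hand_mask, cube_mm=250.0, out_size=96):
    """Normalize a cropped hand depth map for network input.

    Depths inside a cube of side `cube_mm` around the hand's median
    depth are mapped to [-1, 1]; background pixels are pushed to the
    far clipping value. All parameter values are illustrative.
    """
    d = depth_crop.astype(np.float32)
    center = np.median(d[hand_mask])                 # reference depth of the hand
    d = np.where(hand_mask, d, center + cube_mm / 2) # fill background with far plane
    d = np.clip((d - center) / (cube_mm / 2), -1.0, 1.0)
    # nearest-neighbour resize to the network's input resolution
    ys = np.linspace(0, d.shape[0] - 1, out_size).astype(int)
    xs = np.linspace(0, d.shape[1] - 1, out_size).astype(int)
    return d[np.ix_(ys, xs)]

def multi_resolution(depth_full):
    """Build a simple 3-level pyramid (e.g. 96/48/24) by striding."""
    return [depth_full, depth_full[::2, ::2], depth_full[::4, ::4]]
```

Each pyramid level would feed a separate input branch of the network, letting coarse levels supply global hand shape while the finest level preserves fingertip detail.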
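The bending feature of point 3 can be sketched as follows. The abstract specifies only a ratio of fingertip distance to finger-root distance; the choice of the palm center as the common reference point and the extension threshold below are assumptions for illustration:

```python
import numpy as np

def finger_bend_ratio(palm, root, tip):
    """Ratio of tip-to-palm distance over root-to-palm distance.

    With the palm center as reference (an assumption), an extended
    finger places the tip roughly twice as far from the palm as the
    root, while bending pulls the tip back toward the palm and drives
    the ratio down toward 1 or below.
    """
    palm, root, tip = (np.asarray(p, dtype=float) for p in (palm, root, tip))
    return np.linalg.norm(tip - palm) / np.linalg.norm(root - palm)

def classify_fingers(ratios, threshold=1.3):
    """Mark each finger as extended when its ratio exceeds a threshold."""
    return [r > threshold for r in ratios]
```

Counting the extended fingers (0, 2, or 5) is then enough to distinguish the rock, scissors, and paper gestures, and because the ratio is computed from 3D keypoints it is largely insensitive to the hand plane's orientation relative to the camera.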
【Degree-granting institution】: Nanchang University
【Degree level】: Master's
【Year conferred】: 2017
【CLC number】: TP391.41