Hand Gesture Recognition Based on Multi-column Deep 3D Convolutional Neural Network
Posted: 2018-01-04 16:37
Keywords of this article: Hand Gesture Recognition Based on Multi-column Deep 3D Convolutional Neural Network  Source: Computer Engineering, 2017, No. 8  Paper type: Journal article
Related topics: video image sequence processing; gesture recognition; deep learning; feature extraction; convolutional neural network; moving object recognition
【Abstract】: Traditional 2D convolutional neural networks tend to lose motion information along the target's time axis when extracting features from consecutive video frames, which lowers recognition accuracy. To address this, a gesture recognition method based on a multi-column deep 3D convolutional neural network (3D CNN) is proposed. A 3D convolution kernel is applied to consecutive frames so that both the temporal and the spatial features of the target are extracted and its motion information is captured. To avoid misclassification caused by insufficient feature extraction in a single 3D CNN, several 3D CNNs with strong classification ability are trained and combined into a multi-column deep 3D CNN; this structure weighs the outputs of the individual columns and takes the class with the largest weight as the final result. Experimental results show that on the CHGDs dataset the multi-column deep 3D CNN reaches a recognition rate of 95.09%, about 7% and 20% higher than a single 3D CNN and a traditional 2D CNN respectively, and therefore recognizes targets in continuous image sequences well.
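【Implementation Sketch】: The page contains no code, so the following is a minimal, hypothetical PyTorch sketch of the multi-column deep 3D CNN idea described in the abstract. The class names (Column3DCNN, MultiColumn3DCNN), layer widths, number of columns, clip shape, and the softmax-weighted fusion are illustrative assumptions, not values from the paper; only the overall structure (3D convolution kernels over consecutive frames, several independently trained columns, a weighted vote over their outputs) follows the abstract.

import torch
import torch.nn as nn


class Column3DCNN(nn.Module):
    """One 3D CNN column: 3D convolutions slide over consecutive frames,
    so features along the time axis (motion) are preserved."""

    def __init__(self, in_channels: int = 1, num_classes: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),  # spatio-temporal kernel
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),  # collapse (T, H, W) into a feature vector
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, channels, frames, height, width)
        x = self.features(clip).flatten(1)
        return self.classifier(x)  # class scores for this column


class MultiColumn3DCNN(nn.Module):
    """Several 3D CNN columns; their softmax outputs are combined with
    per-column weights and the highest-weighted class is the final result."""

    def __init__(self, num_columns: int = 3, num_classes: int = 8):
        super().__init__()
        self.columns = nn.ModuleList(
            [Column3DCNN(num_classes=num_classes) for _ in range(num_columns)]
        )
        # Illustrative per-column confidence weights (assumption, not from the paper).
        self.column_weights = nn.Parameter(torch.ones(num_columns))

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        probs = torch.stack(
            [torch.softmax(col(clip), dim=1) for col in self.columns]
        )  # (columns, batch, classes)
        w = torch.softmax(self.column_weights, dim=0).view(-1, 1, 1)
        return (w * probs).sum(dim=0)  # weighted vote over columns


if __name__ == "__main__":
    model = MultiColumn3DCNN()
    clip = torch.randn(2, 1, 16, 64, 64)    # 2 clips of 16 grayscale 64x64 frames
    prediction = model(clip).argmax(dim=1)  # final class = largest weighted score
    print(prediction)

In practice each column would be trained separately, as the abstract describes, before the fused model is evaluated; the weights appear here as learnable parameters purely for compactness of the sketch.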
【Affiliation】: School of Electronic and Control Engineering, Chang'an University
【Funding】: National Natural Science Foundation of China Youth Fund (61203374); Shaanxi Natural Science Foundation International Cooperation Project (2014KW01-05)
【Classification Codes】: TP183; TP391.41
【Citation Format】: Yi Sheng, Liang Huagang, Ru Feng. Hand Gesture Recognition Based on Multi-column Deep 3D Convolutional Neural Network[J]. Computer Engineering, 2017, 43(8): 243-248.