

Design and Implementation of a Deep-Learning-Based Micro-Expression Feature Extraction Algorithm

Published: 2018-05-19 17:46

Keywords: micro-expression recognition; feature extraction. Source: Beijing Jiaotong University, 2017 master's thesis.


[Abstract]: Extreme individual behavior and incidents endangering public safety, such as online rumors, bus arson, and vehicles driven into sensitive areas, are on the rise both at home and abroad. To provide early warning of dangerous behavior, organizations and researchers have begun to study automatic warning technologies. Facial expression is an important non-verbal channel for human emotion and can serve as a key cue in such warning systems. Research on facial expressions has produced some results, but most of it concerns ordinary expressions. Beyond these, there are also micro-expressions: expressions that are hard to perceive, last only a very short time, and are closely related to underlying intent. Micro-expression feature extraction is an interdisciplinary research topic spanning computer science, signal and information processing, and clinical psychology; it has significant theoretical and practical value, and it promotes exchange among these fields and the development of related technologies. This thesis focuses on deep-learning-based micro-expression feature extraction algorithms, predicting and classifying four affective attributes of micro-expressions: arousal (how awake versus drowsy the emotion is), valence (whether the emotion is positive or negative), expectation (the degree of surprise in the emotion), and power (the degree of control over one's emotions under external influence). Finally, the predicted values are smoothed with a one-dimensional median filter. The main contributions are: (1) A micro-expression feature extraction algorithm based on a convolutional neural network (CNN). Compared with traditional feature extraction methods, such as the gradient-based HOG and the local-texture-based LBP, the CNN architecture used here activates more nodes in the regions where facial expression is most concentrated, such as the corners of the eyes and mouth. It can therefore learn highly expressive micro-expression features directly from raw data, and its performance does not depend on a precise face detection and localization step. (2) A deep-learning-based algorithm for predicting and classifying the affective factors of micro-expressions. The algorithm replaces the CNN's fully connected layers with a multilayer perceptron (MLP): the global features extracted by the CNN are fed into the MLP for training and recognition, yielding prediction and classification of the four affective attributes (arousal, valence, expectation, and power). Experiments on the AVEC2012 micro-expression database show Top-1 average recognition rates of 71.51%, 73.14%, 66.43%, and 69.05% for arousal, valence, expectation, and power, respectively.
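The pipeline described above (CNN features fed to an MLP head, with per-frame affect predictions smoothed by a one-dimensional median filter) can be sketched as follows. This is a minimal illustrative sketch, not the thesis's implementation: the layer sizes, the ReLU/linear MLP, the 128-dimensional feature vectors, and the filter window of 5 are all assumptions for demonstration.

```python
import numpy as np

# Hypothetical sketch of the MLP head that replaces the CNN's fully
# connected layers, plus the 1-D median filter used to smooth per-frame
# affect predictions. Dimensions and weights are illustrative only.

rng = np.random.default_rng(0)

def mlp_head(features, w1, b1, w2, b2):
    """Two-layer perceptron mapping CNN features to 4 affect scores."""
    hidden = np.maximum(0.0, features @ w1 + b1)  # ReLU hidden layer
    return hidden @ w2 + b2                       # linear outputs

def median_smooth(x, k=5):
    """1-D median filter over a per-frame prediction sequence (edge-padded)."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([np.median(xp[i:i + k]) for i in range(len(x))])

# Toy setup: 30 frames, a 128-d CNN feature per frame, 4 outputs
# (arousal, valence, expectation, power).
T, D, H = 30, 128, 64
feats = rng.standard_normal((T, D))               # stand-in for CNN features
w1, b1 = rng.standard_normal((D, H)) * 0.1, np.zeros(H)
w2, b2 = rng.standard_normal((H, 4)) * 0.1, np.zeros(4)

raw = mlp_head(feats, w1, b1, w2, b2)             # (T, 4) per-frame predictions
smoothed = np.stack([median_smooth(raw[:, j]) for j in range(4)], axis=1)
print(smoothed.shape)  # (30, 4)
```

The median filter suppresses isolated single-frame outliers in each predicted attribute sequence without blurring sustained changes, which is the usual motivation for median (rather than mean) smoothing of frame-level predictions.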
[Degree-granting institution]: Beijing Jiaotong University
[Degree level]: Master's
[Year awarded]: 2017
[CLC classification]: TP391.41


Article ID: 1911058



Link to this article: http://sikaile.net/kejilunwen/ruanjiangongchenglunwen/1911058.html


