Kinect Motion-Driven Real-Time Simulation of Subtle 3D Facial Expressions
[Abstract]: Creating compelling dynamic facial expression animation is a challenging problem in computer graphics. In recent years virtual characters have appeared ever more often in computer games, advertising, and film production, making character animation with subtle facial expressions increasingly important. This thesis proposes a new technique for generating real-time animation of subtle 3D facial expressions, driving a 3D facial mesh model to produce virtual character animation with fine expression detail.
First, to capture the user's changing expression states in real time, Microsoft's Kinect 3D motion-sensing camera tracks the user's face, and the captured facial motion data are decomposed into two parts: the rigid motion of the head and the non-rigid motion of the facial expression. Compared with motion-capture systems that depend on dedicated hardware, the Kinect lowers hardware, setup, and maintenance costs, and it adapts well to the cluttered backgrounds of natural environments.
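The decomposition step above can be illustrated concretely: estimate the rigid head motion by Kabsch alignment of the tracked landmarks against a neutral calibration frame, and take what remains as expression motion. This is a minimal NumPy sketch under assumed inputs; the thesis does not specify its solver, and the (N, 3) landmark arrays and the 121-point count are placeholders for whatever the Kinect face tracker provides.

```python
import numpy as np

def rigid_align(neutral, current):
    """Estimate the rigid head motion (R, t) that best maps the neutral
    landmark set onto the current frame (Kabsch algorithm)."""
    cn, cc = neutral.mean(axis=0), current.mean(axis=0)
    H = (neutral - cn).T @ (current - cc)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cc - R @ cn
    return R, t

def split_motion(neutral, current):
    """Decompose tracked landmarks into rigid head pose and the residual
    non-rigid expression displacement, expressed in head space."""
    R, t = rigid_align(neutral, current)
    stabilized = (current - t) @ R               # undo head motion: R^T (p - t)
    expression = stabilized - neutral            # per-landmark expression offset
    return (R, t), expression

# usage with stand-in data (real input: per-frame 3D landmarks from the tracker)
neutral = np.random.rand(121, 3)                 # stand-in calibration frame
current = neutral + np.array([0.0, 0.0, 0.05])   # stand-in: pure head translation
(R, t), expr = split_motion(neutral, current)    # here expr is close to zero
```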
Second, building on this capture and preprocessing stage, the captured expression is transferred onto a neutral 3D face model by Laplacian deformation, which exploits the local-detail-preserving property of Laplacian coordinates. The pose of the virtual face model is then reconstructed to produce virtual character animation consistent with the user's expression state, including eye opening and closing and mouth motion.
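In the standard Laplacian surface editing formulation this step relies on (a sketch of the usual energy, not necessarily the thesis's exact one), each vertex is encoded by its Laplacian coordinate

\[ \delta_i = v_i - \frac{1}{|N(i)|} \sum_{j \in N(i)} v_j , \]

where \(N(i)\) is the 1-ring neighborhood of vertex \(v_i\) and \(\delta_i\) encodes local surface detail. The deformed positions \(v'\) are then found in least squares,

\[ \min_{v'} \; \sum_i \bigl\| L(v')_i - \delta_i \bigr\|^2 \;+\; \lambda \sum_{k \in C} \bigl\| v'_k - c_k \bigr\|^2 , \]

with \(L\) the mesh Laplacian and \(c_k\) the target positions of the constrained, expression-driven vertices \(k \in C\): the handles follow the captured motion while the Laplacian term preserves the local detail of the neutral face.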
Third, to produce real-time expression animation with subtle features such as wrinkles, skin textures with pores, stubble, and rough, uneven relief are generated and rendered on the GPU with lighting and normal mapping. The weights for dynamic texture mapping and wrinkle formation are then computed from the action units captured by the Kinect, and a wrinkle function is introduced to simulate the motion of wrinkles in real time, yielding motion-driven facial expression animation.
Finally, a real-time subtle facial expression simulation system is designed and implemented with the OpenGL graphics API and the GLSL shading language. Experiments show that the method generates realistic, motion-driven real-time animation of subtle facial expressions and is suitable for digital entertainment, video conferencing, and related applications.
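The wrinkle rendering itself runs in a GLSL fragment shader; as a language-neutral reference for the per-pixel logic, the sketch below blends a base normal map with a wrinkle normal map using an action-unit-driven weight and shades the result with a Lambert term. The smoothstep-style wrinkle function, the region mask, and all names are illustrative assumptions, not the thesis's actual code.

```python
import numpy as np

def wrinkle_weight(au, lo=0.2, hi=0.8):
    """Illustrative wrinkle function: map an action-unit activation in [0,1]
    to a wrinkle blend weight with smooth onset and saturation (smoothstep)."""
    t = np.clip((au - lo) / (hi - lo), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

def shade(base_n, wrinkle_n, mask, au, light_dir):
    """Blend tangent-space normal maps with the AU-driven weight inside the
    masked region, then apply simple Lambertian lighting, mirroring what a
    fragment shader would evaluate per fragment."""
    w = wrinkle_weight(au) * mask[..., None]          # (H, W, 1) blend weight
    n = (1.0 - w) * base_n + w * wrinkle_n            # blend the normal maps
    n /= np.linalg.norm(n, axis=-1, keepdims=True)    # renormalize per pixel
    l = light_dir / np.linalg.norm(light_dir)
    return np.clip(n @ l, 0.0, 1.0)                   # diffuse intensity image

# usage with stand-in data: flat base normals, a "forehead" mask, one AU value
H, W = 64, 64
base_n    = np.tile([0.0, 0.0, 1.0], (H, W, 1))               # flat normals
wrinkle_n = base_n + np.random.uniform(-0.3, 0.3, (H, W, 3))  # fake wrinkle detail
mask      = np.zeros((H, W)); mask[:16, :] = 1.0              # wrinkle region
img = shade(base_n, wrinkle_n, mask, au=0.7,
            light_dir=np.array([0.3, 0.3, 1.0]))
```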
[Degree-granting institution]: Yanshan University
[Degree level]: Master's
[Year conferred]: 2013
[CLC number]: TP391.41