Kinect Motion-Driven Real-Time Simulation of Subtle 3D Facial Expressions
Published: 2019-05-08 09:01
[Abstract]: Creating compelling animation of dynamic facial expressions is a challenging problem in computer graphics. In recent years, virtual characters have appeared ever more often in computer games, advertising, and film production, making character animation with subtle facial expressions increasingly important. This thesis proposes a new technique for generating real-time animation of subtle 3D facial expressions, driving a 3D facial mesh model to produce virtual 3D character animation that carries fine expression detail.

First, to capture the user's changing expression states in real time, Microsoft's Kinect 3D motion-sensing camera tracks the user's facial expressions, and analysis of the captured face-motion data decomposes it into two parts: the rigid motion of the head and the non-rigid motion of the facial expression (a decomposition sketch follows this paragraph). Compared with human motion-capture systems that depend on dedicated hardware, the Kinect lowers the system's hardware, debugging, and maintenance costs, and adapts well to the cluttered backgrounds of natural environments.
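As a concrete illustration of this first step, the sketch below separates rigid head pose from expression motion with a least-squares rigid fit (the Kabsch algorithm). It assumes the tracker delivers per-frame 3D landmark positions plus a neutral reference set; Eigen performs the SVD, and names such as fitRigid are illustrative, not the thesis's actual code.

    // Sketch: split tracked face landmarks into rigid head motion plus
    // expression motion. Assumes N landmarks per frame and a neutral
    // reference set; uses Eigen (function names are illustrative).
    #include <Eigen/Dense>
    #include <vector>

    struct RigidFit {
        Eigen::Matrix3d R;   // head rotation
        Eigen::Vector3d t;   // head translation
    };

    // Least-squares rigid fit (Kabsch algorithm) of the neutral landmarks
    // onto the current frame: frame ~ R * neutral + t. Whatever motion is
    // left over after removing R and t is expression motion.
    RigidFit fitRigid(const std::vector<Eigen::Vector3d>& neutral,
                      const std::vector<Eigen::Vector3d>& frame) {
        Eigen::Vector3d cN = Eigen::Vector3d::Zero(), cF = Eigen::Vector3d::Zero();
        for (size_t i = 0; i < neutral.size(); ++i) { cN += neutral[i]; cF += frame[i]; }
        cN /= double(neutral.size()); cF /= double(frame.size());

        Eigen::Matrix3d H = Eigen::Matrix3d::Zero();   // cross-covariance
        for (size_t i = 0; i < neutral.size(); ++i)
            H += (neutral[i] - cN) * (frame[i] - cF).transpose();

        Eigen::JacobiSVD<Eigen::Matrix3d> svd(H, Eigen::ComputeFullU | Eigen::ComputeFullV);
        Eigen::Matrix3d R = svd.matrixV() * svd.matrixU().transpose();
        if (R.determinant() < 0) {                     // guard against reflections
            Eigen::Matrix3d V = svd.matrixV(); V.col(2) *= -1.0;
            R = V * svd.matrixU().transpose();
        }
        return {R, cF - R * cN};
    }

    // Expression displacement of one landmark once head pose is removed.
    Eigen::Vector3d expressionOffset(const RigidFit& fit,
                                     const Eigen::Vector3d& neutralPt,
                                     const Eigen::Vector3d& framePt) {
        return fit.R.transpose() * (framePt - fit.t) - neutralPt;
    }

The offsets returned by expressionOffset are what the deformation stage consumes, while R and t drive the head pose of the virtual character directly.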
Second, building on the captured expression data, the captured expression is mapped onto a neutral 3D face model with Laplacian deformation, exploiting the local-detail-preserving property of Laplacian coordinates (a deformation sketch follows the abstract). The pose of the virtual 3D face model is then reconstructed, producing virtual 3D character animation consistent with the user's expression state, including eye opening and closing and mouth motion.

Third, to produce real-time animation with subtle expression features such as wrinkles, skin textures with pores, stubble, and rough, uneven relief are generated, and the GPU performs lighting and normal-map rendering. The weights for dynamic texture mapping and wrinkle generation are computed from the action units captured by the Kinect, and a wrinkle function is introduced to simulate wrinkle motion in real time (see the wrinkle-weight sketch below), yielding motion-driven real-time facial-expression animation.

Finally, a real-time subtle-facial-expression simulation system is designed and implemented using the graphics API OpenGL and the high-level shading language GLSL (a shader-side sketch is also included below). Experiments show that the proposed method produces realistic, motion-driven real-time animation of subtle facial expressions, suitable for digital entertainment, video conferencing, and related applications.
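For the deformation step, here is a minimal sketch of Laplacian deformation under stated assumptions: a uniform (graph) Laplacian and soft positional constraints at the tracked landmarks, solved per coordinate channel with Eigen's sparse module. The thesis does not specify these details, so deformChannel, handleWeight, and the constraint weighting are illustrative.

    // Sketch: Laplacian deformation that maps captured landmark motion onto
    // a neutral face mesh while preserving local surface detail. Call once
    // per coordinate channel (x, y, z).
    #include <Eigen/Sparse>
    #include <utility>
    #include <vector>

    using SpMat = Eigen::SparseMatrix<double>;
    using Triplet = Eigen::Triplet<double>;

    Eigen::VectorXd deformChannel(
            const Eigen::VectorXd& neutral,                      // n rest coordinates
            const std::vector<std::vector<int>>& neighbors,      // 1-ring per vertex
            const std::vector<std::pair<int, double>>& handles,  // (vertex, target)
            double handleWeight = 10.0)                          // constraint stiffness
    {
        const int n = static_cast<int>(neutral.size());
        const int m = static_cast<int>(handles.size());

        // Uniform Laplacian L: (L v)_i = v_i - mean of the neighbors of i.
        std::vector<Triplet> trip;
        for (int i = 0; i < n; ++i) {
            trip.emplace_back(i, i, 1.0);
            const double w = -1.0 / double(neighbors[i].size());
            for (int j : neighbors[i]) trip.emplace_back(i, j, w);
        }
        SpMat L(n, n);
        L.setFromTriplets(trip.begin(), trip.end());

        // delta encodes the local detail of the neutral mesh.
        Eigen::VectorXd delta = L * neutral;

        // Stack L on top of soft constraint rows, i.e. minimize
        // ||L v - delta||^2 + handleWeight^2 * sum_h (v_h - target_h)^2.
        for (int k = 0; k < m; ++k)
            trip.emplace_back(n + k, handles[k].first, handleWeight);
        SpMat A(n + m, n);
        A.setFromTriplets(trip.begin(), trip.end());

        Eigen::VectorXd b(n + m);
        b.head(n) = delta;
        for (int k = 0; k < m; ++k) b[n + k] = handleWeight * handles[k].second;

        // Normal equations; in a real-time loop, factorize once and reuse.
        SpMat AtA = SpMat(A.transpose()) * A;
        Eigen::SimplicialLDLT<SpMat> solver(AtA);
        return solver.solve(A.transpose() * b);
    }

Because the mesh connectivity and landmark set are fixed, the factorization can be computed once at startup; only the right-hand side changes per frame, which is what makes this solve fast enough for real time.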
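For the wrinkle step, the sketch below shows one plausible way to turn Kinect-style animation-unit values into per-region wrinkle weights for the renderer. The specific AU names, wrinkle regions, and smoothstep response are assumptions for illustration, not the thesis's exact wrinkle function.

    // Sketch: map Kinect-style animation-unit values (in [0,1]) to per-region
    // wrinkle weights that a fragment shader can use to blend in a wrinkle
    // normal map. Names and regions are illustrative.
    #include <algorithm>
    #include <array>

    struct ActionUnits {
        float browRaise;       // raising eyebrows -> forehead wrinkles
        float browLower;       // frowning         -> glabella wrinkles
        float lipCornerPull;   // smiling          -> nasolabial folds
    };

    // Smooth ramp so wrinkles fade in gradually instead of popping.
    static float ramp(float x, float lo = 0.1f, float hi = 0.8f) {
        float t = std::clamp((x - lo) / (hi - lo), 0.0f, 1.0f);
        return t * t * (3.0f - 2.0f * t);   // smoothstep
    }

    // One weight per wrinkle region, uploaded each frame as shader uniforms.
    std::array<float, 3> wrinkleWeights(const ActionUnits& au) {
        return { ramp(au.browRaise),        // forehead
                 ramp(au.browLower),        // glabella
                 ramp(au.lipCornerPull) };  // nasolabial
    }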
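On the rendering side, a GLSL fragment shader can blend a neutral normal map toward a fully wrinkled one using those weights and a region mask. The snippet below, embedded as a C++ raw string the way shader sources are typically handed to OpenGL, is a sketch with illustrative uniform and sampler names; lighting is omitted.

    // Sketch: shader-side wrinkle blending; not the thesis's actual shader.
    static const char* kWrinkleFrag = R"(
    #version 330 core
    in vec2 uv;
    uniform sampler2D baseNormalMap;     // neutral skin normals
    uniform sampler2D wrinkleNormalMap;  // fully wrinkled normals
    uniform sampler2D regionMask;        // R,G,B = forehead/glabella/nasolabial
    uniform vec3 wrinkleWeights;         // from wrinkleWeights() on the CPU
    out vec4 fragColor;

    void main() {
        float w = dot(texture(regionMask, uv).rgb, wrinkleWeights);
        vec3 n = normalize(mix(texture(baseNormalMap, uv).xyz * 2.0 - 1.0,
                               texture(wrinkleNormalMap, uv).xyz * 2.0 - 1.0,
                               clamp(w, 0.0, 1.0)));
        // Lighting with n omitted; visualize the blended normal instead.
        fragColor = vec4(n * 0.5 + 0.5, 1.0);
    }
    )";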
[Degree-granting institution]: Yanshan University
[Degree level]: Master's
[Year conferred]: 2013
[CLC number]: TP391.41
Document ID: 2471780
Link: http://sikaile.net/wenyilunwen/guanggaoshejilunwen/2471780.html