Research on Key Techniques for Realistic Facial Expression Synthesis
Published: 2018-09-08 17:38
【Abstract】: As an important branch of computer graphics, facial expression animation has long been a research hotspot. The field has produced a large body of results that are widely applied in the film, advertising, and game industries: works such as King Kong, The Lord of the Rings, and Avatar make extensive use of computer-synthesized facial expressions and have shown audiences the appeal of facial expression animation. As technology advances, the demands on both the realism and the speed of synthesized expression animation keep rising, and the field's broad application prospects and technical feasibility will continue to attract investment and attention.

This thesis surveys the state of the art in facial expression synthesis, classifies the existing methods, and analyzes their respective strengths and weaknesses in detail. On this basis, we study several key problems in realistic facial expression synthesis and propose systematic solutions covering the acquisition of facial motion data and extraction of expressions, realistic expression synthesis, and expression editing. Specifically, the work includes the following aspects:

We present a high-precision scheme for acquiring and extracting facial expressions. Facial motion data captured by an optical motion-capture system is mapped, via Radial Basis Function (RBF) interpolation, into the coordinate frame of a neutral face model, yielding motion data in that model's space. Using markers calibrated at capture time that are unaffected by expression changes, we extract the performer's facial expression and recover the corresponding rigid head motion.

We propose a Laplacian-based expression synthesis technique that preserves the existing detail of the face model during deformation and thus maintains the realism of the synthesized expression. For a given face model, we first compute the Laplacian coordinates of every vertex. During synthesis these coordinates are held fixed, and the new positions of all remaining vertices are computed from the displacements of the expression feature points and a set of selected fixed points, yielding the new expression. Combined with the extracted rigid head motion, this produces a target face model whose expression resembles the performer's and whose head pose is consistent.

We propose a new expression synthesis method based on geodesic distance and RBF interpolation. Because of hole regions in the face model such as the mouth and eyes, the Euclidean distance between vertices differs markedly from the geodesic distance along the surface, so conventional Euclidean-distance RBF interpolation tends to stretch these hole regions. We introduce an approximate geodesic-distance rule that measures the geodesic distance from the expression feature points to the other vertices of the model; using geodesic distance to weigh the mutual influence of vertices, combined with RBF interpolation, realistic expressions are synthesized.

Expression editing is an important step in realistic facial expression animation, and we propose a spatio-temporal editing method. Laplacian deformation propagates the user's edits of the expression feature points over the whole face model in the spatial domain; in the temporal domain, an edit made to one frame propagates to neighbouring frames of the animation with a Gaussian falloff. The user may specify the temporal propagation range, which provides local control over the scope of an edit.
We propose a facial expression editing technique based on two-dimensional deformation. During editing, the shape and proportions of every triangle of the face model are preserved so that their total deformation is minimized; based on observations of how expressions change, we additionally constrain the total edge length of the face's outer contour to remain constant, which yields natural, realistic expressions. The technique also applies to the early stages of garment design: it automatically computes a garment's shape under new poses, sparing designers the repetitive redrawing of similar garments and serving as an aid for design work and for exchanging ideas. We use the human skeleton to drive the garment deformation, obtaining control points automatically from the skeleton in its initial pose and computing their target positions from the skeleton in the new pose.

We tested all of the above methods on multiple face models and obtained good experimental results. Finally, we summarize the research, analyze its remaining problems, and point out possible directions for future work.
[Abstract]: As an important branch of computer graphics, facial expression animation has long been a hot research topic. The field has produced a large body of results that are widely applied in the film, advertising, and game industries: works such as King Kong, The Lord of the Rings, and Avatar make extensive use of computer-synthesized facial expressions and have shown audiences the appeal of facial expression animation. As technology advances, the demands on both the realism and the speed of synthesized expression animation keep rising, and the field continues to attract growing investment and attention.
This thesis reviews the development of facial expression synthesis, classifies the existing methods, and analyzes their respective strengths and weaknesses in detail. On this basis, we study several key problems in realistic facial expression synthesis and propose systematic solutions covering the acquisition of facial motion data and extraction of expressions, realistic expression synthesis, and expression editing. Specifically, the work of this thesis includes the following aspects:
A high-precision scheme for acquiring and extracting facial expressions is presented. Facial motion data collected by an optical motion-capture system is mapped, via Radial Basis Function (RBF) interpolation, into the coordinate frame of a neutral face model, yielding facial motion data in that model's space. Using markers calibrated at capture time that are unaffected by expression changes, we extract the performer's facial expression and recover the corresponding rigid head motion.
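The mapping step can be sketched with a standard RBF interpolant fitted on corresponding marker positions in the two coordinate frames. The thesis does not specify the kernel; the sketch below assumes a biharmonic kernel φ(r) = r, and all names are illustrative:

```python
import numpy as np

def rbf_fit(src, dst, kernel=lambda r: r):
    """Solve for weights W so that sum_j W_j * phi(|x - src_j|) interpolates dst.
    src, dst: (n, 3) corresponding marker positions in the two frames."""
    A = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
    Phi = kernel(A)                      # n x n interpolation matrix
    return np.linalg.solve(Phi, dst)     # (n, 3) weights, one column per axis

def rbf_map(points, src, W, kernel=lambda r: r):
    """Map arbitrary capture-space points into the neutral-model frame."""
    D = np.linalg.norm(points[:, None, :] - src[None, :, :], axis=-1)
    return kernel(D) @ W
```

By construction the fitted map reproduces the calibration markers exactly and interpolates smoothly in between, which is what makes RBFs a common choice for this kind of cross-space retargeting.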
A Laplacian-based expression synthesis technique is presented that preserves the existing detail of the face model during deformation and thus maintains the realism of the synthesized expression. For a given face model, we first compute the Laplacian coordinates of every vertex. During synthesis these coordinates are held fixed, and the new positions of all remaining vertices are computed from the displacements of the expression feature points and a set of selected fixed points, yielding the new expression. Combined with the extracted rigid head motion, this produces a target face model whose expression resembles the performer's and whose head pose is consistent.
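A minimal sketch of this kind of deformation, using a uniform (graph) Laplacian and soft positional constraints solved in least squares; the thesis may well use a cotangent-weighted Laplacian on the triangle mesh, so this is illustrative only:

```python
import numpy as np

def laplacian_deform(V, neighbors, handles, w=10.0):
    """Deform vertices V (n x 3) while preserving uniform-Laplacian coordinates.
    neighbors[i] lists the neighbour indices of vertex i; handles maps a vertex
    index (feature point or fixed point) to its target position."""
    n = len(V)
    L = np.eye(n)
    for i, nbrs in enumerate(neighbors):
        for j in nbrs:
            L[i, j] -= 1.0 / len(nbrs)
    delta = L @ V                        # Laplacian coordinates, kept fixed
    # soft positional constraints stacked under the Laplacian system
    C = np.zeros((len(handles), n))
    d = np.zeros((len(handles), 3))
    for r, (i, p) in enumerate(handles.items()):
        C[r, i] = w
        d[r] = w * np.asarray(p)
    A = np.vstack([L, C])
    b = np.vstack([delta, d])
    Vnew, *_ = np.linalg.lstsq(A, b, rcond=None)
    return Vnew
```

Because the Laplacian coordinates encode each vertex relative to its neighbours, solving for positions that reproduce them moves the surface with the handles while keeping local detail intact.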
A new facial expression synthesis method based on geodesic distance and RBF interpolation is proposed. Because of hole regions in the face model such as the mouth and eyes, the Euclidean distance between vertices differs markedly from the geodesic distance along the surface, so conventional Euclidean-distance RBF interpolation tends to stretch these hole regions. We introduce an approximate geodesic-distance rule that measures the geodesic distance from the expression feature points to the other vertices of the model; using geodesic distance to weigh the mutual influence of vertices, combined with RBF interpolation, realistic expressions are synthesized.
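The thesis' approximate rule is not detailed in this abstract; a common stand-in is shortest-path distance along mesh edges (Dijkstra), which upper-bounds the true geodesic distance and, unlike Euclidean distance, cannot cut across the mouth or eye holes:

```python
import heapq

def geodesic_approx(n_vertices, edges, sources):
    """Shortest-path distance from the source feature points to every vertex.
    edges: list of (i, j, length) mesh edges; sources: feature-point indices."""
    adj = [[] for _ in range(n_vertices)]
    for i, j, w in edges:
        adj[i].append((j, w))
        adj[j].append((i, w))
    dist = [float('inf')] * n_vertices
    pq = []
    for s in sources:
        dist[s] = 0.0
        heapq.heappush(pq, (0.0, s))
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                     # stale queue entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist
```

These per-vertex distances can then replace the Euclidean radii inside the RBF kernel, so a feature point on the upper lip no longer pulls on the lower lip through the mouth opening.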
Expression editing is an important step in realistic facial expression animation, and we propose a spatio-temporal editing method. Laplacian deformation propagates the user's edits of the expression feature points over the whole face model in the spatial domain; in the temporal domain, an edit made to one frame propagates to neighbouring frames of the animation with a Gaussian falloff. The user may specify the temporal propagation range, which provides local control over the scope of an edit.
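The temporal side of this can be sketched directly: the edit's influence decays with a Gaussian falloff around the edited frame and is cut off at the user-specified range. The σ choice and all names below are illustrative:

```python
import math

def temporal_weights(edit_frame, n_frames, radius):
    """Gaussian falloff of an edit across neighbouring frames; zero outside
    the user-specified radius."""
    sigma = radius / 3.0                 # ~99.7% of the Gaussian inside the radius
    w = []
    for f in range(n_frames):
        d = f - edit_frame
        if abs(d) > radius:
            w.append(0.0)                # hard cut at the user-chosen range
        else:
            w.append(math.exp(-d * d / (2.0 * sigma * sigma)))
    return w

def propagate_edit(frames, vertex, delta, edit_frame, radius):
    """Apply the weighted displacement delta to one vertex across frames, in place.
    frames: list of per-frame vertex position lists."""
    for f, wf in enumerate(temporal_weights(edit_frame, len(frames), radius)):
        if wf > 0.0:
            frames[f][vertex] = [c + wf * d
                                 for c, d in zip(frames[f][vertex], delta)]
    return frames
```

The edited frame receives the full displacement (weight 1), and frames beyond the radius are untouched, which is exactly the local control described above.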
A facial expression editing technique based on two-dimensional deformation is proposed. During editing, we preserve the shape and proportions of every triangle of the face model so that their total deformation is minimized; based on observations of how expressions change, we additionally constrain the total edge length of the face's outer contour to remain constant, which yields natural, realistic expressions. The technique also applies to the early stages of garment design: it automatically computes a garment's shape under new poses, sparing designers the repetitive redrawing of similar garments and serving as an aid for design work and for exchanging ideas. The human skeleton drives the garment deformation: control points are obtained automatically from the skeleton in its initial pose, and their target positions are computed from the skeleton in the new pose.
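"Preserving the shape and proportion of each triangle" means each triangle should ideally undergo only a similarity transform (rotation plus uniform scale). One way to illustrate the per-triangle term being minimized is the residual after fitting the best 2D similarity; the exact energy used in the thesis may differ:

```python
import numpy as np

def similarity_energy(rest, deformed):
    """Squared residual after the best-fitting 2D similarity transform
    (rotation + uniform scale + translation) between a rest triangle and its
    deformed copy, both given as (3, 2) vertex arrays. Zero iff the triangle
    kept its shape and proportions."""
    p = rest - rest.mean(axis=0)          # centered rest triangle
    q = deformed - deformed.mean(axis=0)  # centered deformed triangle
    # closed-form fit of S = [[a, -b], [b, a]] minimizing sum |S p_i - q_i|^2
    denom = (p * p).sum()
    a = (p * q).sum() / denom
    b = (p[:, 0] * q[:, 1] - p[:, 1] * q[:, 0]).sum() / denom
    S = np.array([[a, -b], [b, a]])
    return float(((p @ S.T - q) ** 2).sum())
```

Summing this term over all triangles of the face mesh and minimizing it, subject to the feature-point and contour-length constraints, is the shape of the optimization the paragraph describes.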
Each of the above methods was tested on multiple face models, with good experimental results. Finally, we summarize the research, analyze its remaining problems, and point out possible directions for future work.
【Degree-granting institution】: Zhejiang University
【Degree level】: Doctoral (PhD)
【Year conferred】: 2012
【CLC number】: TP391.41
Article ID: 2231219
Link: http://sikaile.net/wenyilunwen/guanggaoshejilunwen/2231219.html