
自然場景的3D深度恢復(fù)及應(yīng)用研究

發(fā)布時間:2018-05-25 16:53

Topic: stereoscopic display + depth recovery; Source: Tianjin University, 2015 doctoral dissertation


【摘要】:隨著顯示技術(shù)的快速發(fā)展以及人類日常生活需求的增長,3D立體顯示技術(shù)已然掀起了圖像圖形顯示領(lǐng)域的一場新技術(shù)革命,成為影視及影像行業(yè)的最新、最前沿的高新技術(shù),它以新、特、奇的表現(xiàn)手法,真實而強(qiáng)烈地視覺沖出力,良好優(yōu)美的環(huán)境感染力吸引著人們的目光。同時,3D顯示技術(shù)在各行各業(yè)中都得到了實際有效的應(yīng)用。但是現(xiàn)存的大部分圖片及視頻仍然是2D的,而且市場上仍然沒有非常廉價的適用于個人使用的3D采集設(shè)備。怎么恢復(fù)這些2D圖片或視頻的3D深度信息成為3D立體顯示領(lǐng)域的一項重要任務(wù)。在獲得這些2D場景的深度信息之后即可輕易的將2D場景轉(zhuǎn)換成3D場景。本文圍繞2D場景的深度恢復(fù)及應(yīng)用,研究了利用散焦線索對單幅圖像進(jìn)行深度圖的恢復(fù)、基于圖像結(jié)構(gòu)的深度圖平滑修正、結(jié)合深度圖的RGB圖像顯著性目標(biāo)檢測、利用3D深度信息的在線人類動作識別等重要問題。主要創(chuàng)新點包括:1.提出了一種利用圖像散焦線索恢復(fù)場景3D深度信息的方法,通過圖像的局部區(qū)域頻譜幅度對比度來建立場景深度恢復(fù)模型?紤]到恢復(fù)出的深度圖中可能存在噪音點,本文提出一種基于總變分的圖像保邊緣平滑算法,用以平滑恢復(fù)出的初始深度圖。使得最終恢復(fù)出的深度圖紋理區(qū)域更加平滑,因此本文恢復(fù)的深度圖更加適合于2D到3D轉(zhuǎn)換以及其它方面的應(yīng)用。2.利用圖像的背景先驗和顏色的空間分布,首先提出了一種RGB圖像的顯著性檢測。另外,本文將深度圖信息融入到矩陣的低秩恢復(fù)模型,提出了一種RGBD圖像的顯著性檢測方法。相比于以往的主流算法,我們在不同的RGB圖像數(shù)據(jù)集和RGBD圖像數(shù)據(jù)集上都取得了更好的結(jié)果。本文的結(jié)果有更高的準(zhǔn)確率和召回率,因此,本文檢測出的顯著圖結(jié)果能夠更好的用于圖像的后續(xù)處理,如圖像分割、基于內(nèi)容的圖像編輯等。3.從實際應(yīng)用的角度出發(fā),本文提出了一種基于深度信息的在線人類行為識別算法,通過協(xié)方差矩陣對每一幀進(jìn)行特征描述相,利用核化的SVM和最鄰近搜索算法實現(xiàn)分類。相比于以往基于片段的行為識別方法,本文提出的方法更具實用性。相比于以往的在線行為識別方法,本文的方法具有更高的準(zhǔn)確率、更低的時延。
[Abstract]:With the rapid development of display technology and the growth of human daily life demand, 3D stereoscopic display technology has set off a new technological revolution in the field of image and graphics display, and has become the latest and most advanced technology in the film and video industry. It attracts people's attention with its new, special, strange expression, real and strong visual impact, good beautiful environment appeal. At the same time, 3D display technology has been effectively applied in various industries. But most of the existing images and videos are still 2D, and there is still no very cheap 3D collection device for personal use. How to restore 3D depth information of 2D images or videos becomes an important task in 3D stereoscopic display field. After obtaining the depth information of these 2 D scenes, you can easily convert 2 D scenes into 3 D scenes. Based on the depth restoration and application of 2D scene, this paper studies the restoration of a single image by defocusing clues. The depth map is smoothed based on the image structure, and the salient target detection of RGB image is combined with the depth map. Online recognition of human motion using 3D depth information and other important issues. The main innovations include: 1. In this paper, a method of restoring 3D depth information of scene by defocusing cues is proposed, and the model of scene depth recovery is established by contrast of local region spectrum amplitude. Considering that there may be noise points in the restored depth map, this paper presents an image edge preserving smoothing algorithm based on total variation to smooth the restored initial depth map. So the depth map restored in this paper is more suitable for 2D to 3D conversion and other applications. Based on background priori and color spatial distribution of images, a salience detection method for RGB images is proposed. In addition, the depth map information is incorporated into the low rank recovery model of the matrix, and a significance detection method for RGBD images is proposed. Compared with the previous mainstream algorithms, we have obtained better results on different RGB image datasets and RGBD image datasets. The results of this paper have higher accuracy and recall rate. Therefore, the salient map can be better used in image processing, such as image segmentation, content-based image editing, and so on. From the point of view of practical application, an online human behavior recognition algorithm based on depth information is proposed in this paper. Each frame is characterized by covariance matrix, and the kernel SVM and nearest neighbor search algorithm are used to classify each frame. Compared with the previous segment-based behavior recognition methods, the proposed method is more practical. Compared with the previous online behavior recognition methods, the proposed method has higher accuracy and lower delay.
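The first contribution builds a depth model from the spectral amplitude contrast of local image regions and then smooths the noisy estimate with an edge-preserving total-variation step. Below is a minimal sketch of that general idea, not the thesis's actual model: the window size, the frequency-band split, the input file name `scene.jpg`, and the TV weight are illustrative assumptions, and scikit-image's `denoise_tv_chambolle` stands in for the edge-preserving TV smoothing described in the abstract.

```python
# Sketch: relative depth from defocus via local spectral amplitude contrast,
# followed by total-variation smoothing. All constants are illustrative.
import numpy as np
from skimage import io, color
from skimage.restoration import denoise_tv_chambolle

def local_spectral_contrast(gray, win=32, step=16):
    """Score each block by the ratio of high- to low-frequency FFT amplitude;
    sharper (in-focus) blocks are treated as relatively closer."""
    h, w = gray.shape
    depth = np.zeros((h, w))
    weight = np.zeros((h, w))
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            block = gray[y:y + win, x:x + win]
            amp = np.abs(np.fft.fftshift(np.fft.fft2(block)))
            c = win // 2
            r = np.hypot(*np.mgrid[-c:c, -c:c])       # radial frequency grid
            low = amp[r < win * 0.1].sum() + 1e-8      # low-frequency energy
            high = amp[r >= win * 0.1].sum()           # high-frequency energy
            depth[y:y + win, x:x + win] += high / (high + low)
            weight[y:y + win, x:x + win] += 1.0
    return depth / np.maximum(weight, 1e-8)            # average overlapping blocks

img = io.imread("scene.jpg")                    # hypothetical input image
gray = color.rgb2gray(img[..., :3])
raw_depth = local_spectral_contrast(gray)       # noisy relative depth in [0, 1]
smooth_depth = denoise_tv_chambolle(raw_depth, weight=0.1)  # TV smoothing keeps edges
```

The TV weight trades off smoothness in textured regions against preservation of depth discontinuities; a larger weight flattens texture noise more aggressively.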
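The second contribution combines a background prior with the spatial distribution of color for RGB saliency, and a low-rank matrix recovery model for RGBD saliency. The toy sketch below only illustrates the two RGB cues under simple assumptions: colors are quantized with k-means, clusters that cover much of the image border are down-weighted as likely background, and spatially compact color clusters score higher. The number of clusters and all constants are assumptions, and the low-rank RGBD model is not reproduced here.

```python
# Toy sketch of background-prior + color spatial-compactness saliency cues.
import numpy as np
from skimage import io
from sklearn.cluster import KMeans

img = io.imread("scene.jpg")[:, :, :3].astype(float) / 255.0   # hypothetical input
h, w, _ = img.shape
pixels = img.reshape(-1, 3)
ys, xs = np.mgrid[0:h, 0:w]
coords = np.stack([ys.ravel() / h, xs.ravel() / w], axis=1)    # normalized positions

border = np.zeros((h, w), bool)
border[0, :] = border[-1, :] = border[:, 0] = border[:, -1] = True
border = border.ravel()

k = 8                                                          # assumed color count
labels = KMeans(n_clusters=k, n_init=4, random_state=0).fit_predict(pixels)
saliency = np.zeros(len(pixels))
for c in range(k):
    mask = labels == c
    spread = coords[mask].var(axis=0).sum()                    # spatial spread of color c
    on_border = (mask & border).sum() / max(border.sum(), 1)   # background prior
    saliency[mask] = (1.0 - on_border) / (spread + 1e-6)       # compact, non-border = salient
saliency_map = (saliency / saliency.max()).reshape(h, w)
```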
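The third contribution describes each depth frame with a covariance-matrix feature and classifies online with a kernelized SVM and a nearest-neighbor search. The sketch below shows one common way to realize such per-frame covariance descriptors: the covariance of 3D joint coordinates, mapped into a Euclidean space by a log-Euclidean vectorization, then fed to an RBF SVM. The feature layout, the random stand-in data, and the running majority-vote decision rule are assumptions for illustration, not the thesis's exact pipeline.

```python
# Sketch: per-frame covariance descriptors for online action recognition.
import numpy as np
from scipy.linalg import logm
from sklearn.svm import SVC

def frame_covariance_feature(joints):
    """joints: (num_joints, 3) array of 3D skeleton coordinates for one frame.
    Returns the upper triangle of log(C), a Euclidean embedding of the SPD
    covariance matrix of the joint coordinates."""
    c = np.cov(joints, rowvar=False) + 1e-6 * np.eye(3)   # regularize to keep SPD
    log_c = logm(c).real
    return log_c[np.triu_indices(3)]

def sequence_features(frames):
    """frames: list of (num_joints, 3) arrays; one descriptor per frame."""
    return np.stack([frame_covariance_feature(f) for f in frames])

# Hypothetical stand-in training data: per-frame features labelled with the
# action of the sequence they came from.
rng = np.random.default_rng(0)
train_frames = [rng.normal(size=(20, 3)) for _ in range(200)]
train_labels = rng.integers(0, 5, size=200)

X = sequence_features(train_frames)
clf = SVC(kernel="rbf", gamma="scale").fit(X, train_labels)

# Online use: classify each incoming frame as it arrives and report the
# running majority vote, so a decision is available with low latency.
votes = []
for frame in train_frames[:30]:                 # stand-in for a live frame stream
    votes.append(clf.predict(frame_covariance_feature(frame)[None])[0])
    current_label = np.bincount(votes).argmax()
```

Because a label is updated after every frame rather than after a full clip, this style of per-frame classification is what keeps the latency low compared with segment-based recognition.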
【學(xué)位授予單位】:天津大學(xué)
【學(xué)位級別】:博士
【學(xué)位授予年份】:2015
【分類號】:TP391.41

【相似文獻(xiàn)】

相關(guān)期刊論文 前10條

1 沈國鍵,,夏啟明,李瑛;基于知識自適應(yīng)圖像平滑[J];武漢測繪科技大學(xué)學(xué)報;1995年02期

2 林方特;;幾種常見圖像平滑技術(shù)的研究[J];才智;2009年04期

3 楊朝輝;張大鵬;李乃民;;人體生理圖像與病理舌紋圖像[J];哈爾濱工業(yè)大學(xué)學(xué)報;2009年12期

4 郭琦;;圖像平滑的2維小波插值方法[J];中國圖象圖形學(xué)報;2010年10期

5 楊平先,孫興波;基于粗集方法改進(jìn)的圖像平滑[J];電訊技術(shù);2003年03期

6 王洪亮,曹蘇明,劉國平,吳建華;改進(jìn)的梯度倒數(shù)加權(quán)算法在圖像平滑中的應(yīng)用[J];紅外技術(shù);2003年04期

7 郭海濤,田坦,王連玉,閆宏生;利用粗集理論的聲吶圖像平滑[J];海洋技術(shù);2005年02期

8 呂振肅,魏弘博,劉R

Article ID: 1933959



Link to this article: http://sikaile.net/shoufeilunwen/xxkjbs/1933959.html

