
Research on Automatic Semantic Annotation Methods for Remote Sensing Images

Published: 2018-06-14 22:48

Topic: remote sensing images + automatic annotation; Source: Master's thesis, Shanghai Ocean University, 2017


[Abstract]: As remote sensing technology develops toward higher resolution and wider coverage, the volume of remote sensing image data keeps growing, creating an urgent need for management and understanding capabilities that keep pace with acquisition. Automatic semantic annotation is key to managing and understanding large-scale remote sensing image data: automatically deriving semantic words for an image with information technology helps users grasp its content intuitively and enables efficient management of massive image collections. Existing methods face three challenges. (1) Remote sensing images have complex spatial structure and rich geographic feature information; many studies rely on a single image feature, so annotation accuracy is low. Fusing features helps express image content accurately, but not every feature dimension is strongly correlated with annotation accuracy, and weakly correlated dimensions degrade it. (2) The more features are fused, the higher the dimensionality of the feature data. As remote sensing data grows, traditional annotation methods struggle to mine regularities in massive high-dimensional feature data; low-level features cannot accurately reflect high-level semantic concepts, which limits annotation accuracy. (3) Ocean remote sensing images, a typical case, exhibit marked target-information sparsity: in a large-scale ocean image, the key information often occupies only a small fraction of the scene.
Moreover, at different observation scales the objects of interest carry different semantic concepts, so traditional annotation methods can neither express the content of ocean remote sensing images accurately nor annotate them well. This thesis addresses the low annotation accuracy caused by the complex structure of remote sensing images, in three parts. (1) Because a single feature cannot describe image content accurately, multiple features are fused to represent the image; but naive fusion ignores the harm done by weakly correlated features, so a weighted feature-fusion annotation method is proposed. Without segmenting the image, color features are extracted with color moments in HSV space, texture features with the gray-level co-occurrence matrix, and shape features with the scale-invariant feature transform (SIFT). The stability of each feature dimension is judged by its standard deviation over the images sharing an annotation word, and a weight coefficient is derived from that stability, so that color, texture, and shape all contribute while strongly and weakly correlated dimensions are rescaled accordingly. Experiments with a support vector machine on a public remote sensing dataset show higher annotation accuracy than methods based on any single feature.
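The abstract does not give the exact mapping from per-dimension standard deviation to weight, so the sketch below assumes a simple inverse relation (w = 1/(1 + σ), then normalized); the function names and toy features are illustrative only. It shows the core idea: dimensions that are stable across training images sharing an annotation word receive larger weights in the fused feature vector.

```python
import numpy as np

def stability_weights(feature_matrix):
    """Per-dimension weights from feature stability.

    feature_matrix: (n_images, n_dims) fused features of training
    images that share one annotation word.  A low standard deviation
    in a dimension means that dimension is stable (strongly correlated
    with the word), so it receives a larger weight.  The 1/(1+std)
    mapping is an assumption, not the thesis's exact formula.
    """
    std = feature_matrix.std(axis=0)
    w = 1.0 / (1.0 + std)          # stable dimension -> weight near 1
    return w / w.sum()             # normalize so weights sum to 1

def weighted_fusion(color, texture, shape, weights):
    """Concatenate the three feature vectors and rescale each
    dimension by its stability weight before classification."""
    fused = np.concatenate([color, texture, shape])
    return fused * weights

# toy example: 5 training images sharing one label, 9-dim fused features
rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 9))
feats[:, 0] = 0.5                  # a perfectly stable dimension
w = stability_weights(feats)
print(w.argmax())                  # prints 0: the stable dimension wins
```

The rescaled vector would then be fed to the SVM classifier in place of the raw concatenation.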
(2) The more features are fused, the higher the feature dimensionality and the lower the annotation accuracy, so a deep-learning annotation model that takes the high-dimensional fused features as input is built. Its first layer is an improved restricted Boltzmann machine adapted to the optimally weighted high-dimensional visual features; the remaining layers are standard restricted Boltzmann machines. Layer-by-layer transformation extracts features from low level to high level and discovers distributed representations in the data, improving annotation accuracy. Comparison with a traditional neural network and with the weighted-fusion method confirms that the multi-feature deep-learning method achieves better accuracy. (3) Because target-information sparsity limits annotation accuracy for ocean imagery, a deep-learning multi-instance annotation model for ocean remote sensing images is proposed. Exploiting the multiscale character of such images, the wavelet transform yields representations at several resolutions; background and object regions are coarsely partitioned, and each scale layer is represented as a bag of instances. Similarities between instances in the same scale space are computed and thresholded to achieve adaptive segmentation. The instances of each layer are then fed to the deep-learning model to annotate new images, and co-occurrence and opposition relations among annotation words are computed quantitatively to refine the labels.
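The abstract describes the deep model only at a high level (an improved RBM in the first layer, standard RBMs above, layer-by-layer feature extraction). As a rough illustration of the stacked-RBM idea only, here is a minimal binary RBM trained with one-step contrastive divergence and stacked greedily; the layer sizes, learning rate, and random stand-in for the fused features are arbitrary, and the thesis's "improved" first-layer RBM is not reproduced.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal binary restricted Boltzmann machine trained with
    one-step contrastive divergence (CD-1)."""
    def __init__(self, n_visible, n_hidden, rng, lr=0.1):
        self.W = rng.normal(0, 0.01, size=(n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible bias
        self.c = np.zeros(n_hidden)    # hidden bias
        self.lr = lr
        self.rng = rng

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.c)

    def train_step(self, v0):
        h0 = self.hidden_probs(v0)
        h0_sample = (self.rng.random(h0.shape) < h0).astype(float)
        v1 = sigmoid(h0_sample @ self.W.T + self.b)   # reconstruction
        h1 = self.hidden_probs(v1)
        # CD-1 update: positive phase minus negative phase
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        self.b += self.lr * (v0 - v1).mean(axis=0)
        self.c += self.lr * (h0 - h1).mean(axis=0)

rng = np.random.default_rng(42)
data = (rng.random((64, 30)) < 0.5).astype(float)    # stand-in for fused features

# greedy layer-wise stacking: each RBM's hidden activations feed the next
layer_sizes = [30, 20, 10]
inp = data
for n_vis, n_hid in zip(layer_sizes[:-1], layer_sizes[1:]):
    rbm = RBM(n_vis, n_hid, rng)
    for _ in range(50):
        rbm.train_step(inp)
    inp = rbm.hidden_probs(inp)       # higher-level representation
print(inp.shape)                      # (64, 10)
```

The final `inp` plays the role of the high-level representation from which annotation words would be predicted.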
Finally, experiments verify that the proposed method clearly improves the semantic annotation accuracy of ocean remote sensing images.
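As a loose sketch of the multiscale, multi-instance representation described above: the 2x2 averaging below stands in for the low-pass (approximation) branch of a Haar wavelet transform, non-overlapping patches stand in for instances, and mean-intensity similarity stands in for whatever instance similarity the thesis actually uses; all sizes and thresholds are arbitrary assumptions.

```python
import numpy as np

def coarser(image):
    """One pyramid step: each output pixel is the mean of a 2x2 block
    (the Haar approximation branch, up to a scale factor)."""
    a = image[0::2, 0::2]; b = image[1::2, 0::2]
    c = image[0::2, 1::2]; d = image[1::2, 1::2]
    return (a + b + c + d) / 4.0

def multiscale_layers(image, levels=2):
    """The image expressed at successively coarser resolutions."""
    layers = [image]
    for _ in range(levels):
        layers.append(coarser(layers[-1]))
    return layers

def split_instances(layer, patch=8):
    """Cut one scale layer into non-overlapping patches; each patch
    is one 'instance' in the bag-of-instances view."""
    h, w = layer.shape
    return [layer[i:i+patch, j:j+patch]
            for i in range(0, h - patch + 1, patch)
            for j in range(0, w - patch + 1, patch)]

def group_instances(instances, tol=0.1):
    """Greedily group instances whose mean intensities differ by less
    than tol -- a stand-in for the similarity-threshold step that
    separates a sparse target from broad background such as open sea."""
    groups = []
    for inst in instances:
        m = inst.mean()
        for g in groups:
            if abs(m - g["mean"]) < tol:
                g["members"].append(inst)
                break
        else:
            groups.append({"mean": m, "members": [inst]})
    return groups

rng = np.random.default_rng(1)
sea = rng.normal(0.2, 0.01, size=(64, 64))  # near-uniform background
sea[8:16, 8:16] = 0.9                       # one small bright target
layers = multiscale_layers(sea, levels=2)   # 64x64, 32x32, 16x16
groups = group_instances(split_instances(layers[0]))
print(len(layers), len(groups))             # 3 2: background vs. target
```

The small number of groups relative to the number of patches reflects the target-information sparsity the abstract emphasizes: almost all instances fall into one background group.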
【Degree-granting institution】: Shanghai Ocean University
【Degree level】: Master
【Year conferred】: 2017
【CLC number】: TP751

