
Research on Visual Attention for Web Pages

Published: 2018-12-27 18:36
[Abstract]: The human eye receives an enormous amount of visual information every day. Thanks to an efficient visual attention system, we can select and filter these incoming signals, discard the redundant parts, and pass the most important information through the nervous system to the brain for further processing. Many researchers are now trying to bring this efficient attention mechanism to computers, so that machines can simulate the human visual attention system and support higher-level, more intelligent processing tasks. Visual attention prediction models for natural scenes have been proposed one after another, but attention methods for non-natural scenes such as web pages remain rarely studied. A web page is typically a mixture of images, text, logos, and advertisements, so it carries richer visual information than an ordinary picture; moreover, people browse web pages differently from how they view ordinary pictures. As a result, traditional saliency prediction models designed for natural scenes lose their effectiveness on web pages. This thesis therefore focuses the study of visual attention on web pages and proposes visual attention models suited to them. The main work covers three aspects. First, the thesis builds an annotated database for webpage visual attention research, WSP300 (Webpage Saliency Prediction 300), containing 300 webpage screenshots. To explore how pages with different purposes affect human eye fixation regions, the set includes 116 shopping pages, 105 news pages, and 79 social and other pages. The database is an important supplement to existing webpage visual attention datasets and provides the experimental basis and data support for the models built later in the thesis. Second, the thesis proposes a webpage visual attention prediction model based on multi-feature fusion. Drawing on the commonalities and differences between web pages and ordinary images, the model first defines bottom-up saliency features suited to web pages, obtains an independent vector for each feature via feature mapping, and then trains these feature vectors with a machine learning method so that the proposed features are fused effectively, finally producing a visual attention prediction map (saliency map) for web pages. Third, the thesis proposes a visual attention prediction model based on convolutional neural networks. This model considers that webpage viewing is driven not only bottom-up but also top-down, so a fully convolutional network (FCN) is used to extract high-level semantic information from web pages and fuse it with the bottom-up features. Experiments on the WSP300 database confirm the effectiveness of this model as well. In summary, the thesis (1) builds an annotated fixation database for webpage visual attention and (2) proposes two visual attention prediction models for web pages; experiments show that both models outperform current mainstream visual attention models at predicting visual attention on web pages.
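The multi-feature fusion pipeline the abstract describes (bottom-up feature maps, per-feature vectors, learned fusion into a saliency map) can be illustrated with a minimal sketch. This is not the thesis's implementation: the two toy features (center-surround intensity contrast and gradient magnitude) and the least-squares fusion below merely stand in for the webpage-specific features and machine-learning fusion the abstract refers to.

```python
import numpy as np

def feature_maps(img):
    """Compute toy bottom-up feature maps for a 2-D grayscale image:
    center-surround contrast (image minus a 3x3 box blur) and
    gradient magnitude. Returns an H x W x 2 stack."""
    h, w = img.shape
    pad = np.pad(img, 1, mode="edge")
    # Crude 3x3 box blur as a stand-in for a surround filter.
    blur = sum(pad[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9.0
    contrast = np.abs(img - blur)
    gy, gx = np.gradient(img)
    edges = np.hypot(gx, gy)
    return np.stack([contrast, edges], axis=-1)

def learn_weights(feats, fixation_map):
    """Fit per-feature fusion weights against a ground-truth fixation
    map by least squares (a simple stand-in for the learned fusion)."""
    X = feats.reshape(-1, feats.shape[-1])
    y = fixation_map.ravel()
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def fuse(feats, w):
    """Linearly fuse the feature maps into one saliency map,
    normalized to [0, 1]."""
    s = feats @ w
    s = s - s.min()
    if s.max() > 0:
        s = s / s.max()
    return s
```

Weights learned on annotated pages (e.g. from a fixation database like WSP300) would then be applied via `fuse` to feature maps of unseen pages.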
[Degree-granting institution]: Beijing University of Posts and Telecommunications
[Degree level]: Master's
[Year conferred]: 2017
[CLC number]: TP391.41; TP393.092

[Similar Literature]

Related doctoral dissertations (top 4):

1. 方云華. A functional-MRI study of the neural mechanisms linking visual attention function with TCM constitution and age [D]. Fujian University of Traditional Chinese Medicine, 2017.

2. 王曉萌. Research on visual attention algorithms based on feature fusion [D]. China University of Mining and Technology (Beijing), 2017.

3. 葉志鵬. Research on scene classification methods based on semantic analysis [D]. Harbin Institute of Technology, 2017.

4. 伍博. Research on saliency-based visual object tracking [D]. University of Electronic Science and Technology of China, 2017.

Related master's theses (top 10):

1. 李劍. Research on visual attention for web pages [D]. Beijing University of Posts and Telecommunications, 2017.

2. 劉楚驍. Web spam detection based on cost-sensitive methods [D]. Southwest Jiaotong University, 2017.

3. 王大浩. Research and implementation of malicious webpage code detection techniques [D]. Beijing University of Posts and Telecommunications, 2017.

4. 黃夢(mèng)賢. A practice report on translating the website of the School of Marine Science and Technology, Tianjin University [D]. Tianjin University, 2016.

5. 騰飛. Innovation and application of visual elements in web design [D]. Jilin University of the Arts, 2017.

6. 陳鎮(zhèn). An analysis of visual composition optimization on the WeChat official account platform [D]. Hubei Institute of Fine Arts, 2017.

7. 胡金戈. Research on visual saliency detection based on shifts of the visual focus [D]. Southwest University, 2017.

8. 屈安琪. Research on constructing visual representations of university culture from a we-media perspective [D]. China University of Mining and Technology, 2017.

9. 陳麗麗. Research on test methods for the visual shielding properties of fabrics [D]. Donghua University, 2017.

10. 崔伯瑞. Research on visual ergonomics in interaction design [D]. Beijing University of Posts and Telecommunications, 2017.


Article ID: 2393447


Link to this article: http://sikaile.net/guanlilunwen/ydhl/2393447.html


