Research on Information Distribution Technology in Future Networks
Published: 2017-12-30 18:24
Keywords: research on information distribution technology in future networks. Source: Beijing University of Posts and Telecommunications, master's thesis, 2015. Thesis type: degree thesis.
Related topics: future networks; data distribution; caching
【Abstract】: As people rely ever more on the Internet to obtain information, the load on the Internet keeps growing. From the user's perspective, accesses to network data are correlated in time and space: once a user in a region has accessed a piece of content, other users in that region are likely to access it again. If the content is fetched from the original data source every time, the same content is transmitted repeatedly, wasting network bandwidth and placing heavy load on the origin server. Network caching technology alleviates this problem to some extent. As network usage gradually shifts from being based on host location (IP address) to being based on content, in-network caching is becoming increasingly important, and new content-centric future network architectures have emerged. Since the primary purpose of the future network is the distribution and retrieval of information, content distribution technology is a core problem in future network research. This thesis therefore focuses on information distribution technology in future networks.

In recent years there has been extensive research on future networks, and work on data distribution has concentrated on in-network caching. Caches in future networks, and in Information-Centric Networking (ICN) in particular, are ubiquitous, transparent, and fine-grained; together they form a built-in caching network that strongly affects data distribution performance. Active research topics include how to size the cache at each node, cache decision policies that choose at which nodes content is cached, cache replacement policies that decide which content to evict when a cache is full, and cooperative caching policies among nodes. Most existing work, however, is limited to wired scenarios with static topologies; caching in mobile scenarios, especially the rapidly changing topologies of vehicular networks, has received little attention.

To address content distribution in future networks, and in ICN under mobile scenarios in particular, this thesis designs a cooperative caching mechanism based on social relations. The mechanism improves link stability between nodes in a mobile network, guarantees the quality of data distribution, enlarges the scope and improves the efficiency of content discovery, and raises content distribution efficiency in sparse-node environments, thereby improving resource utilization and user experience. A simulation platform was built on ndnSIM, with modules modified to suit the high-speed mobile node scenario of vehicular networks. The ubiquitous caching policy of ICN and the proposed cooperative caching mechanism were then compared in simulation, demonstrating the effectiveness and superiority of the cooperative mechanism for data distribution in future networks. Finally, building on a summary of the thesis, the shortcomings of this work and possible improvements are pointed out.
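Among the caching questions listed above is the cache replacement policy: which content to evict when a node's cache is full. The abstract does not name a specific policy, so the following is an illustrative sketch only, showing a minimal LRU (least recently used) content store of the kind commonly used as a default in ICN nodes:

```python
from collections import OrderedDict

class LRUContentStore:
    """Minimal LRU content store: evicts the least recently used item when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # content name -> data, oldest first

    def get(self, name):
        if name not in self.store:
            return None  # cache miss: the Interest would be forwarded upstream
        self.store.move_to_end(name)  # mark as most recently used
        return self.store[name]

    def put(self, name, data):
        if name in self.store:
            self.store.move_to_end(name)
        self.store[name] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used item
```

For example, with capacity 2, inserting `/video/a` and `/video/b`, touching `/video/a`, and then inserting `/video/c` evicts `/video/b`, since it is the least recently used entry.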
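The cooperative caching mechanism selects caching nodes using social relations and link stability, but the abstract gives no concrete metric. The sketch below is purely hypothetical: a vehicle scores each neighbor by contact history (a proxy for social-tie strength) and relative speed (a proxy for link stability), then caches content at the best-scoring neighbor. The function names and the scoring formula are assumptions for illustration, not the thesis's actual design.

```python
def stability_score(contact_count, avg_contact_duration, rel_speed, alpha=0.5):
    """Hypothetical score: frequent, long contacts at low relative speed
    suggest a socially related neighbor with a stable link."""
    social = contact_count * avg_contact_duration  # social-tie strength
    mobility_penalty = 1.0 / (1.0 + rel_speed)     # fast relative motion -> unstable link
    return alpha * social + (1 - alpha) * social * mobility_penalty

def choose_cache_node(neighbors):
    """Pick the neighbor with the highest stability score as the cache holder.

    Each neighbor is a dict with keys: id, contacts, avg_duration, rel_speed.
    """
    return max(
        neighbors,
        key=lambda n: stability_score(n["contacts"], n["avg_duration"], n["rel_speed"]),
    )
```

Under this scoring, a vehicle met often and moving at a similar speed outranks a stranger passing quickly in the opposite direction, which is the intuition behind using social relations to stabilize caching in a fast-changing vehicular topology.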
【Degree-granting institution】: Beijing University of Posts and Telecommunications
【Degree level】: Master
【Year degree conferred】: 2015
【CLC classification number】: TP393.02
Article ID: 1355919
Link: http://sikaile.net/guanlilunwen/ydhl/1355919.html