Research and Implementation of Web Crawler Performance Improvement and Function Expansion
[Abstract]: With the rapid development of the Internet, the World Wide Web has become the carrier of a vast amount of information, and extracting and using this information effectively has become a major challenge. Web crawlers arose to meet this need: a crawler is a program or script that automatically fetches World Wide Web content according to certain rules. This thesis first reviews the history of web crawlers and their fields of application. An analysis of mainstream crawlers shows that today's crawlers mainly serve search engines and prepare data resources for topic-oriented user queries; although their crawling architectures are highly extensible, the emphasis traditional crawlers place on serving search engines has gradually weakened their flexibility and functional richness. The thesis then discusses several metrics for evaluating crawler performance and presents optimization strategies for small and medium-sized web crawlers from two angles: performance improvement and function expansion.

For performance improvement, the thesis introduces an optimization scheme for each functional module: (1) requesting gzip/deflate compressed transfer, cutting network transfer time by reducing the volume of data transmitted; (2) downloading pages with asynchronous requests to raise bandwidth and CPU utilization; (3) crawling breadth-first and using a Bloom filter for large-scale URL deduplication; (4) extracting page links with carefully designed regular expressions; (5) strictly normalizing crawled URLs so that malformed URLs do not mislead the crawler; and (6) managing multithreading efficiently with an optimized thread pool.

For function expansion, the thesis distinguishes its crawler from traditional crawlers in three respects: (1) static page performance analysis that offers a website advice on improving its performance; (2) operation as an automated test tool that executes test cases against a specified page; and (3) customizable focused data extraction that captures data in the format the user specifies.

To verify the above optimization strategies, the crawler was developed in C# with Visual Studio 2008 on the .NET platform, which proved particularly suitable for lightweight crawlers. The program runs in command-line mode and is highly configurable through configuration files.
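As a minimal sketch of the first two performance measures above, compressed transfer and asynchronous downloading, the following C# program asks the server for gzip/deflate-encoded responses and fetches several pages concurrently. It uses modern .NET APIs (HttpClient, async/await) rather than the Visual Studio 2008 toolchain the thesis targets, and the seed URLs are placeholders.

```csharp
// Illustrative sketch: gzip/deflate content negotiation plus concurrent
// asynchronous downloads. Not the thesis's original code.
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

class CompressedDownloader
{
    // AutomaticDecompression makes HttpClient send "Accept-Encoding: gzip, deflate"
    // and transparently decompress the response body, reducing bytes on the wire.
    private static readonly HttpClient Client = new HttpClient(
        new HttpClientHandler
        {
            AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate
        });

    static async Task Main(string[] args)
    {
        // Placeholder seed URLs for demonstration only.
        string[] seeds =
        {
            "https://example.com/",
            "https://example.org/"
        };

        // Start all requests first, then await them together, so the crawler is
        // not blocked waiting on one slow response at a time.
        Task<string>[] downloads = Array.ConvertAll(seeds, url => FetchAsync(url));
        string[] pages = await Task.WhenAll(downloads);

        for (int i = 0; i < seeds.Length; i++)
            Console.WriteLine($"{seeds[i]} -> {pages[i].Length} characters");
    }

    static async Task<string> FetchAsync(string url)
    {
        using (HttpResponseMessage response = await Client.GetAsync(url))
        {
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
    }
}
```

With AutomaticDecompression enabled, the handler adds the Accept-Encoding header and inflates the body transparently, so the rest of the crawler sees plain HTML while the network carries the smaller compressed payload.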
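The third, fourth, and fifth measures, breadth-first crawling with Bloom-filter deduplication, regex link extraction, and URL normalization, can be sketched together as below. The filter size, hash scheme, regular expression, and sample page are illustrative assumptions, not the thesis's implementation.

```csharp
// Illustrative sketch: breadth-first URL frontier with a small Bloom filter
// for "seen URL" checks, regex-based link extraction, and URL normalization.
using System;
using System.Collections;
using System.Collections.Generic;
using System.Text.RegularExpressions;

class BloomFilter
{
    private readonly BitArray _bits;
    private readonly int _hashCount;

    public BloomFilter(int bitCount, int hashCount)
    {
        _bits = new BitArray(bitCount);
        _hashCount = hashCount;
    }

    // Double hashing: derive k bit positions from two base hashes of the key.
    private IEnumerable<int> Positions(string key)
    {
        int h1 = key.GetHashCode();
        int h2 = StringComparer.OrdinalIgnoreCase.GetHashCode(key);
        for (int i = 0; i < _hashCount; i++)
            yield return Math.Abs((h1 + i * h2) % _bits.Length);
    }

    public void Add(string key)
    {
        foreach (int p in Positions(key)) _bits[p] = true;
    }

    // May return a false positive, but never a false negative.
    public bool MightContain(string key)
    {
        foreach (int p in Positions(key))
            if (!_bits[p]) return false;
        return true;
    }
}

class Frontier
{
    // Resolve relative links against the page URL, then keep scheme, host,
    // path, and query only (drops the fragment, canonicalizes the host).
    static string Normalize(string pageUrl, string href)
    {
        var absolute = new Uri(new Uri(pageUrl), href);
        return absolute.GetLeftPart(UriPartial.Query);
    }

    static void Main()
    {
        var seen = new BloomFilter(1 << 20, 4);   // ~1M bits, 4 hash functions (assumed sizes)
        var queue = new Queue<string>();          // FIFO queue => breadth-first order
        var link = new Regex("href\\s*=\\s*[\"']([^\"'#]+)[\"']", RegexOptions.IgnoreCase);

        // Sample page content for demonstration only.
        string pageUrl = "http://example.com/index.html";
        string page = "<a href='/a.html'>a</a> <a href=\"http://Example.com/b?x=1\">b</a>";

        foreach (Match m in link.Matches(page))
        {
            string url = Normalize(pageUrl, m.Groups[1].Value);
            if (!seen.MightContain(url))          // skip URLs we have (probably) seen
            {
                seen.Add(url);
                queue.Enqueue(url);
            }
        }

        while (queue.Count > 0)
            Console.WriteLine(queue.Dequeue());   // crawl these next, in BFS order
    }
}
```

A Bloom filter trades a small false-positive rate (a fresh URL occasionally mistaken for a seen one) for constant memory per element, which is what makes it attractive for large-scale URL deduplication.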
【Degree-granting institution】: Jilin University
【Degree level】: Master
【Year degree awarded】: 2012
【CLC number】: TP391.3