

Research and Development of an Object Detection System Based on Deep Learning

Published: 2018-05-20 18:13

Topic: object detection + deep learning; Source: Master's thesis, Capital University of Economics and Business, 2017


[Abstract]: The rapid development of computer science has brought great progress to human life and made it increasingly intelligent. Artificial intelligence has long been a field that people explore tirelessly. Human vision is a key part of how we perceive the external world; research suggests that 70 to 80 percent of the information people receive arrives through vision. Computer vision has therefore always been an important research direction in the long pursuit of artificial intelligence. It draws on image processing, machine learning, pattern recognition, and other disciplines, with the ultimate goal of emulating human vision so that computers can carry out recognition tasks. Object detection is a particularly important sub-field of computer vision: its task is to locate the objects of interest in an image. An autonomous driving system, for example, must detect pedestrians, vehicles, and other objects in the current driving scene, and the complexity of real road conditions requires the detection system to have a fairly high-level semantic understanding of the scene. In the past, most object detection algorithms were built on traditional filtering methods: hand-designed classical features such as SIFT [22] and HOG [2] were extracted and fed into classical classifiers such as SVM [30] or Adaboost [29] for classification and recognition. Because the features are handcrafted, these pipelines are labor-intensive and not very robust, and detection quality varies greatly when the environment changes noticeably. Convolutional neural networks in deep learning, by contrast, have very strong feature representation power and produce highly robust features. This thesis therefore builds on the classical deep-learning detection framework Faster R-CNN [5], experiments with different feature extraction layers, and modifies the network structure of the traditional model so that the resulting network strikes a better balance between accuracy and speed. The model is trained on annotated data with tuned parameters, and the final model, which achieves good accuracy and speed, is integrated into the detection system. The system is developed on Linux, uses the Qt graphical interface library as the GUI framework, and is implemented in C++ at the lower layers. The development process described in this thesis covers overall requirements analysis, design, implementation, and testing. Final testing shows that the system performs well in terms of both hardware utilization and runtime performance.
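As a point of reference for the "hand-crafted features plus classical classifier" pipeline that the abstract contrasts with deep learning, the following is a minimal sketch of a HOG-plus-linear-SVM pedestrian detector built on OpenCV's stock model. It illustrates the traditional approach only; it is not code from the thesis, and the input/output file names are placeholders.

```cpp
// Baseline sketch: HOG features + a pre-trained linear SVM (OpenCV's stock
// pedestrian model). Illustrative only; not the thesis's code.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main(int argc, char** argv) {
    if (argc < 2) {
        std::cerr << "usage: hog_baseline <image>\n";
        return 1;
    }
    cv::Mat img = cv::imread(argv[1]);
    if (img.empty()) {
        std::cerr << "could not read image\n";
        return 1;
    }

    // HOG descriptor with OpenCV's built-in linear SVM for pedestrians.
    cv::HOGDescriptor hog;
    hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());

    // Multi-scale sliding-window detection over the whole image.
    std::vector<cv::Rect> boxes;
    std::vector<double> weights;
    hog.detectMultiScale(img, boxes, weights);

    // Draw the detections and save the annotated image.
    for (size_t i = 0; i < boxes.size(); ++i)
        cv::rectangle(img, boxes[i], cv::Scalar(0, 255, 0), 2);
    cv::imwrite("detections.png", img);
    return 0;
}
```

The fixed, hand-designed feature and the sliding-window search are what make such pipelines slow and brittle when the scene changes, which is exactly the limitation the abstract attributes to traditional methods.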
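For the deep-learning side, the sketch below shows how a pre-trained two-stage detector exported for OpenCV's dnn module could be run from C++. The thesis itself trains a Faster R-CNN variant with a modified feature-extraction backbone; the model and config file names here are hypothetical stand-ins, and the assumed output layout is the standard [1, 1, N, 7] detection blob produced by OpenCV-compatible detection models, so this is a sketch under those assumptions rather than the author's implementation.

```cpp
// Illustrative inference sketch: loading an exported two-stage detector with
// OpenCV's dnn module. File names and output layout are assumptions.
#include <opencv2/dnn.hpp>
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // Hypothetical paths to an exported detection model and its config.
    cv::dnn::Net net = cv::dnn::readNet("faster_rcnn.pb", "faster_rcnn.pbtxt");

    cv::Mat img = cv::imread("frame.jpg");
    if (img.empty()) return 1;

    // Preprocess the frame into the network's expected input blob.
    cv::Mat blob = cv::dnn::blobFromImage(img, 1.0, cv::Size(600, 600),
                                          cv::Scalar(), true, false);
    net.setInput(blob);

    // Assumed output layout [1,1,N,7]:
    // [batch, classId, confidence, x1, y1, x2, y2] with normalized coords.
    cv::Mat out = net.forward();
    cv::Mat det(out.size[2], out.size[3], CV_32F, out.ptr<float>());

    for (int i = 0; i < det.rows; ++i) {
        float conf = det.at<float>(i, 2);
        if (conf < 0.5f) continue;  // simple confidence threshold
        int x1 = static_cast<int>(det.at<float>(i, 3) * img.cols);
        int y1 = static_cast<int>(det.at<float>(i, 4) * img.rows);
        int x2 = static_cast<int>(det.at<float>(i, 5) * img.cols);
        int y2 = static_cast<int>(det.at<float>(i, 6) * img.rows);
        cv::rectangle(img, cv::Point(x1, y1), cv::Point(x2, y2),
                      cv::Scalar(0, 0, 255), 2);
        std::cout << "class " << det.at<float>(i, 1)
                  << " conf " << conf << "\n";
    }
    cv::imwrite("result.jpg", img);
    return 0;
}
```

Swapping the feature-extraction backbone, as the abstract describes, changes the cost of the convolutional trunk feeding the region proposal and classification heads, which is where the accuracy/speed trade-off is made.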
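Finally, since the abstract describes a Linux system with a Qt front end and a C++ back end, here is a minimal Qt skeleton showing how a detector could be wired to the interface. The window layout and the slot are illustrative assumptions, not the thesis's actual design; the real system would run the detector where the comment indicates and display the annotated result.

```cpp
// Minimal Qt5 skeleton: open an image from the GUI and show it. The detector
// call is left as a comment because the thesis's detector class is not shown.
#include <QApplication>
#include <QMainWindow>
#include <QPushButton>
#include <QFileDialog>
#include <QLabel>
#include <QPixmap>
#include <QVBoxLayout>

int main(int argc, char** argv) {
    QApplication app(argc, argv);

    QMainWindow window;
    QWidget* central = new QWidget(&window);
    QVBoxLayout* layout = new QVBoxLayout(central);

    QLabel* view = new QLabel("No image loaded");
    QPushButton* openButton = new QPushButton("Open image and detect");
    layout->addWidget(view);
    layout->addWidget(openButton);

    QObject::connect(openButton, &QPushButton::clicked, [&]() {
        QString path = QFileDialog::getOpenFileName(&window, "Choose image");
        if (path.isEmpty()) return;
        // In the real system the trained detector would run here and the
        // annotated result would be displayed; this sketch only loads the file.
        view->setPixmap(QPixmap(path).scaled(640, 480, Qt::KeepAspectRatio));
    });

    window.setCentralWidget(central);
    window.resize(800, 600);
    window.show();
    return app.exec();
}
```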
【Degree-granting institution】: Capital University of Economics and Business
【Degree level】: Master's
【Year of award】: 2017
【Classification number】: TP391.41; TP18




Article ID: 1915711


Link: http://sikaile.net/kejilunwen/zidonghuakongzhilunwen/1915711.html

