

Research on Autonomous Driving Methods Based on Computer Vision and Deep Learning

Published: 2018-11-05 14:13
[Abstract]: Autonomous driving means that a vehicle perceives its surroundings through sensors and adjusts its driving behavior in real time to complete the driving task without human intervention. It can reduce traffic accidents, improve the utilization of road traffic resources, and lower travel costs, so research on the technology is of great significance. Vision-based autonomous driving takes images observed by a visual sensor as input and driving actions as output. Existing methods fall mainly into three categories: mediated perception, direct perception, and end-to-end control. Mediated perception decomposes the driving task into sub-tasks such as object detection, object tracking, semantic scene segmentation, camera modeling and calibration, and 3D reconstruction. Direct perception first learns key indicators of the traffic environment and then hands control to a separate control logic. End-to-end control directly maps input to action and has a concise system structure. This thesis designs an autonomous driving algorithm based on end-to-end control and deep learning. The algorithm treats driving as a single, holistic problem and builds an end-to-end learning system: a convolutional neural network (CNN) with seven convolutional layers and four fully connected layers. The network's input is a first-person image from the vehicle; its output is a single floating-point number, the predicted steering angle. Compared with traditional methods that predict discrete actions such as turning left or right, a continuous steering angle describes motion more precisely. To improve training, the algorithm employs network pre-training and measures against over-fitting. Compared with mediated-perception and direct-perception architectures, the proposed algorithm has clear advantages. First, it avoids the complex system structure of mediated perception, reducing design difficulty. Second, data collection and training can be carried out efficiently in real scenes, whereas direct perception must learn driving-related indicators such as distance to obstacles and to lane markings; collecting these accurately in real scenes requires ultrasonic sensors, lidar, and similar equipment, and is therefore costly and hard to realize. The proposed algorithm only needs to record field-of-view images and steering angles as training samples for the CNN; at test time it collects only field-of-view images and continuously controls the intelligent vehicle with the steering angles the CNN predicts. To reduce the cost of full-scale vehicle experiments, a miniature intelligent-vehicle system was designed and implemented for data collection and algorithm validation. The vehicle collects data, trains, and is tested in a self-designed simulated traffic environment with lane markings and obstacles. Visualizing the CNN shows that it extracts decision-relevant features on its own. Experimental results show that the vehicle can plan reasonable routes in advance to avoid obstacles and maintain a high autonomous-driving rate in routine scenarios.
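The abstract specifies the network shape (seven convolutional layers, four fully connected layers, one floating-point steering-angle output) but not its kernel sizes, channel counts, or input resolution. A minimal PyTorch sketch under those assumptions, with illustrative layer dimensions:

```python
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    """End-to-end steering regressor: 7 conv layers + 4 fully connected
    layers, as described in the abstract. Kernel sizes, channel counts,
    and the 66x200 input resolution are illustrative assumptions, not
    values from the thesis."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        # Pool to a fixed spatial size so the FC head is input-size agnostic.
        self.pool = nn.AdaptiveAvgPool2d((1, 4))
        self.regressor = nn.Sequential(
            nn.Linear(64 * 1 * 4, 100), nn.ReLU(),
            nn.Dropout(0.5),       # over-fitting countermeasure, as the abstract mentions
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 10), nn.ReLU(),
            nn.Linear(10, 1),      # single float: the predicted steering angle
        )

    def forward(self, x):
        x = self.pool(self.features(x))
        return self.regressor(torch.flatten(x, 1))

model = SteeringNet()
frame = torch.randn(1, 3, 66, 200)   # one first-person camera frame
angle = model(frame)
print(angle.shape)                   # torch.Size([1, 1])
```

Regressing a continuous angle, rather than classifying left/right actions, is what allows the smooth control the abstract claims.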
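The abstract describes training as pairing recorded field-of-view images with steering angles, and inference as predicting the angle from the image alone. A self-contained sketch of that regression setup, with a deliberately tiny placeholder network and random placeholder data (the thesis does not specify the loss or optimizer; MSE and Adam are assumptions):

```python
import torch
import torch.nn as nn

# Placeholder model: any image-to-scalar regressor works for the sketch.
# Conv2d(3, 8, 5, stride=4) on 64x64 frames yields 8 x 15 x 15 features.
model = nn.Sequential(
    nn.Conv2d(3, 8, 5, stride=4), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 15 * 15, 1),   # single-float steering-angle output
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder batch: 4 first-person frames and their recorded angles.
frames = torch.randn(4, 3, 64, 64)
angles = torch.randn(4, 1)

for _ in range(3):               # a few gradient steps
    optimizer.zero_grad()
    loss = loss_fn(model(frames), angles)
    loss.backward()
    optimizer.step()

# At test time only the camera frame is needed; the predicted angle
# drives the steering actuator directly.
with torch.no_grad():
    predicted_angle = model(frames[:1]).item()
print(type(predicted_angle))     # <class 'float'>
```

This is the practical advantage the abstract emphasizes: the training set needs only (image, steering angle) pairs, with no lidar or ultrasonic ground truth.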
[Degree-granting institution]: Harbin Institute of Technology
[Degree level]: Master's
[Year conferred]: 2017
[Classification]: U463.6; TP391.41





