Design of Back-End Event Feature Extraction Methods for an AER Vision Sensor System
Published: 2018-04-26 09:16
Topic: AER vision system + feature extraction; Source: Master's thesis, Tianjin University, 2016
【Abstract】: Traditional vision sensors acquire images by frame scanning. As practical vision applications demand ever higher speed, frame-based sensors have hit development bottlenecks: excessive data rates, limited frame rates, and low dynamic range. Address-Event Representation (AER) vision sensors, built on a bio-inspired model of visual perception, have therefore become a research focus in machine vision thanks to their high speed, low latency, and low redundancy. Such sensors respond only to pixels whose intensity changes, asynchronously outputting sparsely represented event information; this eliminates redundant data at the source and makes them especially suitable for machine vision systems such as high-speed object capture and target recognition. This thesis studies three feature extraction algorithms for AER vision sensors. These algorithms extract the shape and texture features of a target from low-redundancy event information in real time, preparing data for further research on machine recognition systems. The thesis first briefly introduces the concepts, operating principles, and basic structure of the area-array AER vision sensor and the linear-array Timed-AER vision sensor, and points out systemic drawbacks of this sensor class: its event information is not easy to interpret, and traditional image processing methods cannot be inherited. To meet the needs of target shape feature extraction, a high-speed target binarization method based on matching AER event pairs is then designed: ON/OFF event information is preprocessed by denoising, thinning, and contour closing to obtain the main outline of the target; event-pair matching then determines the target region, completing binarization and separating target from background. A high-speed binary connected-component labeling method based on the equivalence-label idea is also designed; it labels only the limited set of event points, avoiding redundant traversal of full-frame images, improving labeling efficiency, and segmenting different targets within the same field of view. Finally, an AER convolution processing algorithm is proposed that convolves event information with 16 Gabor templates, extracting texture features of the event information at different orientations and scales. Experimental analysis of the proposed algorithms and comparison with traditional algorithms show in simulation that the AER event-based target binarization algorithm copes with non-ideal conditions such as non-uniform illumination and low contrast while remaining efficient, with an average running time of 2~4 s for a 512×512 image; the event-based binary connected-component labeling algorithm runs 1.5~8 times faster than the traditional equivalence-label algorithm; and the AER convolution processing algorithm effectively extracts texture features of the raw event information at different orientations and scales. In summary, the three algorithms proposed in this thesis efficiently extract features from event information and are well suited to high-speed AER vision system applications.
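The event stream described in the abstract can be made concrete with a small sketch. The event fields and the 1 ms correlation window below are illustrative assumptions, not the thesis's actual data format; the filter implements the common spatiotemporal-correlation idea of dropping events that have no recent neighbour activity, one plausible form of the denoising preprocessing mentioned above.

```python
from collections import namedtuple

# An AER event: pixel address, timestamp (µs), and polarity
# (+1 for an ON/brightness-increase event, -1 for OFF).
# Field names are illustrative; the thesis does not fix a format.
Event = namedtuple("Event", ["x", "y", "t", "polarity"])

def denoise(events, width, height, window_us=1000):
    """Keep an event only if one of its 8 neighbours fired recently.

    A simple spatiotemporal-correlation filter: isolated events with
    no recent neighbour activity are treated as sensor noise.
    """
    last_seen = [[-10**9] * width for _ in range(height)]
    kept = []
    for ev in events:
        supported = False
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dx == 0 and dy == 0:
                    continue
                nx, ny = ev.x + dx, ev.y + dy
                if 0 <= nx < width and 0 <= ny < height:
                    if ev.t - last_seen[ny][nx] <= window_us:
                        supported = True
        if supported:
            kept.append(ev)
        last_seen[ev.y][ev.x] = ev.t
    return kept
```

Because only incoming events are touched, the cost scales with event count rather than with frame size, which is the efficiency argument the abstract makes for event-based processing in general.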
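One plausible reading of the event-pair matching step, sketched per scan row: after preprocessing, ON and OFF contour events bound the target, so pairing each ON column with the next OFF column to its right and filling the span between them yields the binarized region. The function below is a hypothetical simplification (single row, clean closed contours), not the thesis's full method.

```python
import numpy as np

def binarize_row_pairs(on_cols, off_cols, width):
    """Fill the span between matched ON/OFF edge events on one scan row.

    Each sorted ON column is paired with the nearest OFF column to its
    right, and the pixels in between are marked as foreground (1).
    A real implementation must also handle noise, open contours, and
    overlapping objects.
    """
    row = np.zeros(width, dtype=np.uint8)
    ons = sorted(on_cols)
    offs = sorted(off_cols)
    j = 0
    for x_on in ons:
        # advance to the first OFF event strictly right of this ON event
        while j < len(offs) and offs[j] <= x_on:
            j += 1
        if j < len(offs):
            row[x_on:offs[j] + 1] = 1
            j += 1
    return row
```

Applied row by row over the event addresses, this separates target from background without ever touching pixels where no event occurred.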
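The AER convolution stage can be sketched as follows. The 4-orientation × 4-scale split of the 16 Gabor templates and all parameter values are assumptions (the abstract only states that 16 templates are used); the event-driven convolution stamps a kernel onto an accumulation map at each event address, so only active pixels cost work.

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lam, gamma=0.5):
    """Real part of a standard Gabor kernel (parameters are assumed)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr) ** 2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lam)

def make_bank():
    """16 kernels: 4 orientations x 4 scales, one assumed split of the
    thesis's 16 Gabor templates."""
    bank = []
    for lam in (3.0, 5.0, 7.0, 9.0):      # scale (carrier wavelength)
        for k in range(4):                 # orientation: 0, 45, 90, 135 deg
            bank.append(gabor_kernel(9, 0.56 * lam, k * np.pi / 4, lam))
    return bank

def event_convolve(events, kernel, height, width):
    """Event-driven convolution: each (x, y, polarity) event adds the
    kernel, weighted by polarity, to the map centred at its address."""
    acc = np.zeros((height, width))
    half = kernel.shape[0] // 2
    for x, y, polarity in events:
        y0, y1 = max(0, y - half), min(height, y + half + 1)
        x0, x1 = max(0, x - half), min(width, x + half + 1)
        ky0, kx0 = y0 - (y - half), x0 - (x - half)
        acc[y0:y1, x0:x1] += polarity * kernel[ky0:ky0 + (y1 - y0),
                                               kx0:kx0 + (x1 - x0)]
    return acc
```

Running all 16 maps in parallel and comparing their responses gives the orientation- and scale-selective texture features the abstract describes.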
【Degree-granting institution】: Tianjin University
【Degree level】: Master's
【Year conferred】: 2016
【Classification codes】: TP391.41; TP212
Article ID: 1805394
Link: http://sikaile.net/kejilunwen/ruanjiangongchenglunwen/1805394.html