A Deep-Learning-Based Human-Machine Interaction System for UAVs
Published: 2018-11-13 07:59
[Abstract]: Current control of unmanned aerial vehicles (UAVs) relies mainly on specialized equipment operated by trained personnel. To make human-robot interaction (HRI) more convenient, this paper proposes a gesture-based UAV control method built on binocular vision and deep learning. A depth map is first extracted with binocular vision; the region containing the person is tracked and a threshold is applied to separate the person from the background, yielding a depth map that contains only the person. The depth-map sequence is then processed and superimposed, converting the video into a color texture image that carries both temporal and spatial information. The color texture images are trained and recognized with the deep learning framework Caffe, and UAV control commands are generated from the recognition results. The method works both indoors and outdoors with an effective range of up to 10 m. It simplifies UAV control and is significant for promoting UAV adoption and broadening the range of UAV applications.
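The pipeline described in the abstract (stereo depth extraction, depth-based person segmentation, and temporal superposition of the depth sequence into a single color texture image for a classifier) can be sketched roughly as below. This is a minimal illustration, not the authors' implementation: it assumes OpenCV's StereoSGBM matcher as a stand-in for the paper's binocular depth extraction, and the disparity thresholds, frame-weighting scheme, and function names are hypothetical.

```python
# Illustrative sketch of the gesture-recognition front end described in the
# abstract. Assumes rectified stereo pairs; all parameters are placeholders.
import cv2
import numpy as np


def depth_from_stereo(left_bgr, right_bgr):
    """Compute a disparity map (depth proxy) from a rectified stereo pair."""
    left = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
    # StereoSGBM returns fixed-point disparities scaled by 16.
    return matcher.compute(left, right).astype(np.float32) / 16.0


def segment_person(disparity, near=5.0, far=40.0):
    """Keep only pixels in the disparity band where the tracked person stands;
    everything else (background) is zeroed out."""
    mask = (disparity > near) & (disparity < far)
    return np.where(mask, disparity, 0.0)


def sequence_to_texture(depth_frames):
    """Superimpose a short sequence of segmented depth maps into one
    pseudo-color texture image: pixel position keeps the spatial information,
    while the time-dependent weights encode the temporal information."""
    n = len(depth_frames)
    acc = np.zeros_like(depth_frames[0], dtype=np.float32)
    for i, frame in enumerate(depth_frames):
        acc += frame * float(i + 1) / n  # later frames weighted more heavily
    norm = cv2.normalize(acc, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    return cv2.applyColorMap(norm, cv2.COLORMAP_JET)
```

In the full system, the resulting texture image would be passed to a Caffe classifier, and the predicted gesture class mapped to a UAV command (take off, land, move, and so on); that mapping is application-specific and not shown here.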
[Affiliations]: School of Electrical and Information Engineering, Tianjin University; Tianjin Key Laboratory of Advanced Electrical Engineering and Energy Technology; School of Electrical Engineering and Automation, Tianjin Polytechnic University
[Funding]: National Natural Science Foundation of China (61571325); Tianjin Science and Technology Support Program Key Projects (15ZCZDGX00190, 16ZXHLGX00190)
[CLC Number]: TP391.41; V279
[Similar Literature]
Related journal articles (top 2):
1. Xiu Jihong, Li Jun, Huang Pu. Design and implementation of a human-machine interaction system for an aerial survey camera [J]. Chinese Journal of Liquid Crystals and Displays, 2011(04).
2. Zhang Liping, Xie Shuanqin. Research and design of human-machine interaction in an aircraft power supply system [J]. Computer Measurement & Control, 2006(03).
Article ID: 2328494
Link: http://sikaile.net/kejilunwen/ruanjiangongchenglunwen/2328494.html