Research on Motion Planning Methods for Mobile Robots Based on Autonomous Vision
[Abstract]: In recent years, research on mobile robots has advanced rapidly worldwide, and mobile robots have been deployed for complex tasks such as picking and assembling goods in unattended warehouses. Their ability to carry out pre-programmed tasks in unmanned environments has attracted growing attention. In human living environments, however, a robot acting as a helper must do more than execute its own program: it must perceive the people around it, on the one hand avoiding collisions with them, and on the other hand independently recognizing the work instructions issued by a person in the room and acting on them as planned. Vision, as a principal channel of human-computer interaction, makes these capabilities possible. This thesis studies an autonomous vision system that recognizes the limb (body-language) commands of a natural person, together with a mobile robot capable of flexible omnidirectional motion. First, a robot platform with four Mecanum wheels is built; it can move forward, backward, left, and right, translate diagonally at 45°, and rotate in place clockwise or counter-clockwise with zero turning radius. Second, a vision system that recognizes the body-language movements of a natural person is built: a space vector algorithm is proposed to recognize human posture, and a set of limb movements is defined so that they can be identified. Third, a mobile robot control system based on a SQL Server database server is built; commands are exchanged between the upper and lower computers through the server, providing real-time command-relay control. Finally, the visual recognition efficiency and the overall control performance of the mobile platform are tested. The experimental results show that the system recognizes human body-language features efficiently and that the operator can flexibly and quickly command the omnidirectional motion of the mobile robot through his or her own actions; the overall performance of the mobile platform meets the design requirements.
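The omnidirectional motion primitives listed in the abstract (straight translation, 45° diagonal translation, zero-radius rotation) all follow from the standard inverse kinematics of a four-Mecanum-wheel base. The Python sketch below illustrates one common X-configuration convention; the class name, geometry parameters, and sign conventions are illustrative assumptions, not taken from the thesis.

```python
# A minimal sketch of Mecanum-wheel inverse kinematics (illustrative only;
# wheel geometry and sign conventions are assumptions, not the thesis's).
from dataclasses import dataclass


@dataclass
class MecanumBase:
    wheel_radius: float   # r, metres
    half_length: float    # lx, half the front-to-rear wheel spacing, metres
    half_width: float     # ly, half the left-to-right wheel spacing, metres

    def wheel_speeds(self, vx: float, vy: float, wz: float):
        """Map a body velocity command to four wheel angular velocities.

        vx: forward speed (m/s), vy: leftward strafe speed (m/s),
        wz: counter-clockwise yaw rate (rad/s).
        Returns (front_left, front_right, rear_left, rear_right) in rad/s,
        using one common X-configuration convention.
        """
        k = self.half_length + self.half_width
        r = self.wheel_radius
        fl = (vx - vy - k * wz) / r
        fr = (vx + vy + k * wz) / r
        rl = (vx + vy - k * wz) / r
        rr = (vx - vy + k * wz) / r
        return fl, fr, rl, rr


# Example: the motion primitives mentioned in the abstract.
base = MecanumBase(wheel_radius=0.05, half_length=0.20, half_width=0.15)
print(base.wheel_speeds(0.3, 0.0, 0.0))    # straight forward
print(base.wheel_speeds(0.2, 0.2, 0.0))    # 45-degree diagonal translation
print(base.wheel_speeds(0.0, 0.0, 1.0))    # zero-radius in-place rotation
```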
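The "space vector algorithm" is described only at a high level in the abstract; one plausible reading is that limb postures are encoded as 3-D vectors between skeleton joints and compared by their included angles. The minimal Python sketch below illustrates that idea; the joint layout, the "turn right" gesture, and the thresholds are hypothetical.

```python
# A minimal sketch of posture recognition from joint vectors (illustrative;
# gesture definitions and thresholds are assumptions, not the thesis's).
import numpy as np


def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by the vectors b->a and b->c."""
    u = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))


def classify_right_arm(shoulder, elbow, wrist):
    """Toy classifier: a straight, horizontally raised right arm -> 'turn right'."""
    elbow_angle = joint_angle(shoulder, elbow, wrist)
    arm = np.asarray(wrist, dtype=float) - np.asarray(shoulder, dtype=float)
    horizontal = abs(arm[1]) < 0.15 * np.linalg.norm(arm)   # y axis is "up" here
    if elbow_angle > 160.0 and horizontal:
        return "TURN_RIGHT"
    return "UNKNOWN"


# Joint positions (x, y, z) in metres, e.g. from a depth-camera skeleton stream.
print(classify_right_arm((0.0, 1.4, 2.0), (0.3, 1.4, 2.0), (0.6, 1.4, 2.0)))
```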
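Command transfer through a SQL Server database can be pictured as a table acting as a queue: the upper computer (vision host) inserts each recognized command, and the lower computer (motion controller) polls for the oldest unhandled row. The sketch below uses Python with pyodbc purely for illustration; the table name, columns, and connection string are assumptions, not the thesis's actual schema.

```python
# A minimal sketch of relaying motion commands through a SQL Server table.
# Schema and credentials are invented for illustration.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=RobotControl;UID=robot;PWD=secret"
)


def publish_command(cmd: str) -> None:
    """Upper computer: append a recognised gesture command to the queue."""
    with pyodbc.connect(CONN_STR) as conn:   # context manager commits on exit
        conn.execute(
            "INSERT INTO CommandQueue (Command, Handled) VALUES (?, 0)", cmd
        )


def fetch_next_command():
    """Lower computer: pop the oldest unhandled command, or None if empty."""
    with pyodbc.connect(CONN_STR) as conn:
        cur = conn.cursor()
        row = cur.execute(
            "SELECT TOP 1 Id, Command FROM CommandQueue "
            "WHERE Handled = 0 ORDER BY Id"
        ).fetchone()
        if row is None:
            return None
        cur.execute("UPDATE CommandQueue SET Handled = 1 WHERE Id = ?", row.Id)
        conn.commit()
        return row.Command
```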
【Degree-granting institution】: Beijing University of Civil Engineering and Architecture
【Degree level】: Master's
【Year conferred】: 2017
【CLC number】: TP242