Research and Development of Key Technologies for Intelligent Palletizing Robots
發(fā)布時間:2018-06-23 23:18
Topic: intelligent palletizing robot; binocular vision. Source: Master's thesis, Henan University of Science and Technology, 2015.
【Abstract】: Robots are now used ever more widely; with the growth of manufacturing and logistics in particular, robotic palletizing has attracted increasing attention. Domestic research on and application of palletizing-robot technology, however, remain limited: the level of intelligence is low and the relevant techniques are immature. This stems partly from the impact of foreign robots on the Chinese market and partly from the weakness of domestic research in the relevant technologies. To raise the level of intelligence of domestic robotic palletizing, this thesis studies key technologies of the palletizing robot, including its vision system and multi-sensor information fusion.

The thesis first reviews the research progress and application status of palletizing robots at home and abroad, summarizes the application characteristics of foreign palletizing robots and the shortcomings of domestic ones in research and application, and identifies the focus of this work. To meet the experimental requirements, a suitable camera, lens, and image-acquisition card are selected to build the binocular vision system used here. The vision-system coordinate frames and the camera imaging model are analyzed, and the measurement principle and calibration methods of the binocular system are studied; from the mathematical model of binocular measurement, a method for recovering spatial coordinates is derived. To achieve high measurement accuracy, a precision-analysis model is established and the influence of focal length and of the angle between the baseline and the optical axis on measurement accuracy is examined geometrically, yielding methods for reducing error. The binocular system is calibrated both with the MATLAB calibration toolbox and with a planar circular-dot target; the two calibrations are compared and the more accurate method is adopted. To locate the key measurement points of palletized objects more precisely, a method is proposed that fits straight lines to edge points and takes their intersections as the key points, improving matching-point localization accuracy. The forward kinematics, inverse kinematics, and workspace of an ABB IRB2400 robot are solved and its paths are planned. Suitable sensors are selected to form a multi-sensor information-fusion recognition system for palletized objects; after comparing fusion methods, a BP neural network is used to fuse visual, force, and thermal information for object recognition and judgment.

Finally, a palletizing-robot vision and grasping experimental platform is built to validate the methods for visual measurement, trajectory planning, and multi-sensor information fusion. The experimental results show a binocular-measurement relative error of 0.87%–2.51% and a sensor-fusion recognition accuracy of up to 91.7%, confirming the feasibility of the proposed methods.
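The spatial-coordinate recovery step of the binocular system can be illustrated with the standard parallel-axis (rectified) stereo model. This is a generic sketch, not the thesis's actual derivation; the focal length, baseline, and pixel coordinates below are illustrative assumptions.

```python
import numpy as np

def triangulate_parallel(f, B, xl, xr, y):
    """Recover a 3-D point from a rectified stereo pair.

    f      : focal length, in pixels
    B      : baseline (distance between camera centres), in mm
    xl, xr : x-coordinates of the matched point in the left/right image,
             relative to each principal point, in pixels
    y      : y-coordinate of the point (equal in both images after rectification)
    Returns (X, Y, Z) in the left-camera frame, in mm.
    """
    d = xl - xr                 # disparity; must be positive for a visible point
    if d <= 0:
        raise ValueError("non-positive disparity: point cannot be triangulated")
    Z = f * B / d               # depth is inversely proportional to disparity
    X = xl * Z / f
    Y = y * Z / f
    return X, Y, Z

# Illustrative numbers only: f = 1000 px, baseline 120 mm,
# a match at xl = 60 px, xr = 40 px, y = 30 px.
X, Y, Z = triangulate_parallel(1000.0, 120.0, 60.0, 40.0, 30.0)
print(X, Y, Z)   # 360.0 180.0 6000.0
```

The inverse relationship between depth and disparity in this model is also why the precision analysis in the thesis matters: for distant objects the disparity is small, so a fixed pixel-level matching error produces a larger relative depth error.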
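The keypoint-localization idea, fitting straight lines to edge points and taking their intersection, can be sketched as follows. The total-least-squares fit and the sample edge points are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def fit_line(pts):
    """Total-least-squares line fit.

    Returns (a, b, c) with a*x + b*y + c = 0 and a^2 + b^2 = 1.
    """
    pts = np.asarray(pts, dtype=float)
    centroid = pts.mean(axis=0)
    # The line normal is the singular vector of the centred points
    # with the smallest singular value.
    _, _, vt = np.linalg.svd(pts - centroid)
    a, b = vt[-1]
    c = -(a * centroid[0] + b * centroid[1])
    return a, b, c

def intersect(l1, l2):
    """Intersection of two lines given in homogeneous form (a, b, c)."""
    p = np.cross(l1, l2)        # homogeneous intersection point
    if abs(p[2]) < 1e-12:
        raise ValueError("lines are parallel")
    return p[0] / p[2], p[1] / p[2]

# Two noisy edges of a box corner; the fitted lines should meet near (2, 3).
edge1 = [(0.0, 3.0), (1.0, 3.01), (2.0, 2.99), (3.0, 3.0)]   # ~horizontal edge
edge2 = [(2.0, 0.0), (2.01, 1.0), (1.99, 2.0), (2.0, 3.0)]   # ~vertical edge
x, y = intersect(fit_line(edge1), fit_line(edge2))
print(round(x, 2), round(y, 2))   # ≈ 2.0 3.0
```

Averaging many edge points into a single fitted line before intersecting is what makes the corner estimate more stable than picking one detected corner pixel, which is the rationale the abstract gives for improved matching-point accuracy.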
【Degree-granting institution】: Henan University of Science and Technology
【Degree level】: Master
【Year conferred】: 2015
【CLC classification】: TP391.41; TP242
Document ID: 2058863
Permalink: http://sikaile.net/guanlilunwen/wuliuguanlilunwen/2058863.html