Research on Intelligent Planar and Spatial Visual Servo Control Algorithms for Industrial Robots
Published: 2023-01-12 17:36
Robot perception has been an important research area for many years, and computer vision plays a central role in a robot's perception of its environment. Robot control based on visual feedback is known as visual servo control. To raise the technical maturity of visual servo control so that it can be applied reliably in industrial settings, researchers have made a wide variety of attempts; to date, however, there are few cases of in-depth research on visual servoing in genuinely complex industrial environments. Based on extensive experiments, this thesis presents applications of computer vision and visual servoing algorithms in industrial scenarios with broad demand. First, it studies the design of a position-based visual servoing spray-painting robot for auto parts. Point-cloud processing is adopted so that the system can perceive various types of geometry in real time. Combined with dimensionality reduction, the 3D point cloud is projected onto a 2D planar subspace, where 2D image-processing methods such as contour searching are applied. An improved feature-extraction method is proposed that uses a sector model to extract useful features from 2D images of auto parts. Statistical matching is used to recognize auto parts from a finite set, and its computational cost is compared with that of object-recognition methods widely used in point-cloud processing. The comparison shows that the proposed algorithm offers better real-time performance than the feature-extraction steps of the other object-recognition methods. For pose estimation, the well-known iterative closest point (ICP) algorithm is adopted and combined with a genetic algorithm to overcome its tendency to converge to local optima. Compared with the traditional ICP algorithm, the improved...
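The dimensionality-reduction step described above (projecting a 3D point cloud onto the 2D planar subspace of largest variance, before 2D contour processing) can be sketched with a PCA-style projection. This is a minimal illustration assuming NumPy; the function name and the synthetic "tilted plate" data are hypothetical, not taken from the thesis:

```python
import numpy as np

def project_to_plane(points):
    """Project an N x 3 point cloud onto its two principal axes.

    Returns N x 2 coordinates in the plane spanned by the two
    directions of largest variance (PCA via SVD of the centered cloud).
    """
    centered = points - points.mean(axis=0)
    # Rows of vt are the principal directions, sorted by variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T

# Synthetic example: a nearly flat plate with small out-of-plane noise;
# the projection recovers its 2D extent for contour searching.
rng = np.random.default_rng(0)
plate = rng.uniform(-1.0, 1.0, size=(500, 2))
cloud = np.column_stack([plate, 0.05 * rng.standard_normal(500)])
flat = project_to_plane(cloud)
print(flat.shape)  # (500, 2)
```

The thesis additionally applies contour searching and a sector-model feature extractor on the projected 2D data; those steps are not reproduced here.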
Pages: 183
Degree: Doctoral
Table of Contents:
Abstract (in Chinese)
Abstract
Chapter 1 Introduction to Visual Servoing
1.1 Computer Vision in Industry
1.2 Notation
1.3 Problem Formulation
1.3.1 Camera Modeling
1.3.2 Visual Servo Modeling
1.4 Image based Visual Servo
1.5 Position based Visual Servo
1.6 Direct Visual Servo
1.7 Research Targets and Contributions
1.8 Organization of the Manuscript
1.8.1 Part I: Improving the Traditional State of the Art for Industrial Applications
1.8.2 Part II: Advancing the Current State of the Art with Developments in Artificial Intelligence
1.9 Bibliographical Remarks
Chapter 2 Recognition and Pose Estimation of Auto Parts for an Autonomous Spray Painting Robot
2.1 Introduction
2.1.1 Object Recognition
2.1.2 Pose Estimation
2.1.3 Contribution Summary
2.2 Overview of Car Part Recognition
2.2.1 Segmentation of Point Cloud
2.2.2 Principal Component Analysis
2.2.3 Contour Searching
2.2.4 Cross Correlation
2.3 Feature Selection
2.4 Object Recognition Pipeline
2.5 Point Set Registration Pipeline
2.6 Experimental Results
2.6.1 Time Complexity Analysis
2.6.2 Comparative Analysis for Pose Estimation
2.7 Conclusion
Chapter 3 Quality Inspection of Remote Radio Unit (RRU) Power Port using IBVS
3.1 Introduction
3.2 Image Features
3.2.1 Center Points
3.2.2 Area
3.2.3 Angle
3.3 IBVS Control Design
3.4 Results
3.4.1 Simulation
3.4.2 Experiment
3.5 Conclusion
Chapter 4 Quality Inspection of Remote Radio Units using Depth-free Image based Visual Servo with Acceleration Command
4.1 Introduction
4.1.1 Detection of Power Port
4.1.2 IBVS Control Design
4.1.3 Contribution Summary
4.2 Computer Vision Pipeline
4.2.1 Extraction of Region of Interest
4.2.2 Region of Interest Tracking using Camshift
4.2.3 Improved Camshift Tracking
4.3 Active Auto-Focus
4.4 Features Selection
4.5 Derivation of Depth Free Image Jacobian
4.6 IBVS Control Design
4.6.1 Problem Formulation
4.6.2 Acceleration Command
4.6.3 Stability Analysis
4.7 Results
4.7.1 Experimental Validation of Computer Vision Pipeline
4.7.2 Simulation Results
4.7.3 Experimental Validation
4.8 Conclusion
Chapter 5 Position based Visual Servoing in Joint Space with Deep Neural Networks
5.1 Introduction
5.1.1 Monocular Pose Estimation
5.1.2 PBVS in Joint Space
5.1.3 Contribution Summary
5.2 Position based Visual Servoing using Deep CNN
5.2.1 Theoretical Preliminaries
5.2.2 Deep CNN in Joint Space
5.3 Preparation of Dataset
5.3.1 Homography and Virtual Camera Setup
5.3.2 Incorporating Robot Kinematics
5.3.3 Brightness and Contrast Variation
5.3.4 Introduction of Occlusions
5.4 Control Design
5.5 Experimental Validation
5.5.1 Training the Networks
5.5.2 PBVS Control
5.6 Conclusion
Chapter 6 Deep Siamese Convolutional Neural Networks for Scene Identification
6.1 Introduction
6.2 Related Work
6.3 Preliminaries
6.3.1 Reinforcement Learning in Visual Servo
6.3.2 Sparse vs Dense Rewards Modeling
6.3.3 Siamese Networks for Scene Detection
6.4 Preparation of Data set
6.4.1 Affine Transformation of Images
6.4.2 Brightness Variation
6.4.3 Occlusion Handling
6.5 Network Architecture
6.6 Results
6.6.1 Generation of Data set
6.6.2 Network Training
6.6.3 Network Performance Analysis
6.6.4 Comparative Analysis with Naive Keypoints Matching
6.7 Conclusion
Conclusions (in Chinese)
Conclusions
References
Papers and Other Achievements Published during Doctoral Study
Acknowledgements
Curriculum Vitae
Document ID: 3730214
Link: http://sikaile.net/kejilunwen/shengwushengchang/3730214.html