A Brain-inspired Simultaneous Localization and Mapping System for 3D Environments
Published: 2021-10-21 23:58
As robots are deployed ever more widely on land, at sea, in the air, and in space, robotic systems face major new challenges in autonomy, robustness, and intelligence, and achieving true robot autonomy has become a frontier research topic in the field. Three-dimensional simultaneous localization and mapping (SLAM), one of the key enabling technologies for robot autonomy, still faces many challenges in practice. In unknown, complex 3D environments in particular, a robot's onboard resources (sensing, computation, storage, power, weight, and size) are severely constrained; sensor signals are easily disturbed, and external sources such as GPS cannot be relied upon. Existing 3D SLAM techniques consequently suffer from high energy consumption, high computational cost, poor environmental adaptability, and low intelligence, and have become a bottleneck for mobile-robot applications. Developing new intelligent 3D SLAM techniques with extremely low energy consumption, extremely high efficiency, and extremely strong robustness is therefore an urgent open problem. Humans and animals in nature, however, possess remarkable 3D navigation abilities. A bat, for example, using only senses such as its eyes and ears and a tiny brain, and without any high-precision map, can navigate intelligently through complex, dynamic 3D environments, consuming very little energy yet achieving very high efficiency and robustness. How, then, does the brain perform intelligent 3D navigation? In recent years, neuroscientists have gradually discovered the brain's "3D map" and "3D compass", composed of 3D place cells, 3D head direction cells, 3D grid cells, and other spatial cells, progressively unveiling the brain's 3D...
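The core mechanism this dissertation builds on, path integration (Chapters 4-5), accumulates self-motion cues (translational speed and angular rates, estimated here from visual odometry) into a running pose estimate. The following is only a minimal dead-reckoning sketch of that idea, not the dissertation's actual grid-cell or head-direction-cell model; the function and its parameter names are illustrative assumptions.

```python
import math

def path_integrate_3d(pose, speed, yaw_rate, pitch_rate, dt):
    """Advance a 3D pose (x, y, z, yaw, pitch) by one step of
    self-motion cues. Illustrative dead reckoning only -- a stand-in
    for the neural path integration described in the thesis."""
    x, y, z, yaw, pitch = pose
    # Update heading first, keeping yaw wrapped to [0, 2*pi).
    yaw = (yaw + yaw_rate * dt) % (2 * math.pi)
    pitch += pitch_rate * dt
    # Project the travelled distance onto the 3D heading direction.
    d = speed * dt
    x += d * math.cos(pitch) * math.cos(yaw)
    y += d * math.cos(pitch) * math.sin(yaw)
    z += d * math.sin(pitch)
    return (x, y, z, yaw, pitch)

# Level flight along +x at unit speed for one time unit:
pose = path_integrate_3d((0.0, 0.0, 0.0, 0.0, 0.0), 1.0, 0.0, 0.0, 1.0)
# → (1.0, 0.0, 0.0, 0.0, 0.0)
```

Because each step compounds odometry noise, a system like this one must correct the integrated pose with external cues (the visual templates of Chapter 7), which is what the thesis's 3D pose calibration step provides.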
【Source】: China University of Geosciences (Hubei), a Project 211 institution directly administered by the Ministry of Education
【Pages】: 221
【Degree】: Doctoral
【Table of Contents】:
Author Biography
Abstract (in Chinese)
Abstract
List of Abbreviations
Chapter 1 Introduction
1.1 Motivation
1.2 Research Problems
1.3 Research Contents
1.4 Contributions
1.5 Dissertation Organization
Chapter 2 Literature Review
2.1 Conception of 3D SLAM and Navigation
2.1.1 SLAM for 3D Environments
2.1.2 Navigation for 3D Environments
2.1.3 Navigation Modes
2.1.4 Classification of 3D SLAM and Navigation Approaches
2.2 Conventional 3D SLAM
2.3 Neural Basis of 3D Navigation
2.3.1 Neural Representation of 3D Space
2.3.2 Computational Model of 3D Spatial Cells
2.4 Bio-inspired SLAM
2.4.1 Behaviour Strategy-inspired SLAM
2.4.2 Brain-inspired SLAM
2.5 Summary
Chapter 3 System Overview
3.1 The Framework and Components
3.1.1 3D Pose Representation
3.1.2 3D Path Integration
3.1.3 3D Spatial Experience Map
3.1.4 3D Vision Perception
3.2 The Software Architecture
3.3 The Working Process
3.4 Summary
Chapter 4 3D Pose Representation based on 3D Place and Head Direction Cells
4.1 Requirements of Robot's Pose Representation in 3D Space
4.2 Biological Properties of 3D Place Cells and 3D Head Direction Cells
4.3 3D Place Cell Model
4.3.1 Conceptual Model of 3D Place Cells
4.3.2 Computational Model of 3D Place Cells
4.4 3D Head Direction Cell Model
4.4.1 Multilayered HD Cell Model for 4DoF Pose Representation
4.4.2 Torus 3D HD Cell Model for 5DoF Pose Representation
4.4.3 Conjunctive 3D HD Cell Models for 6DoF Pose Representation
4.5 Experiments
4.5.1 Experiments of 4DoF Pose Representation
4.5.2 Experiments of 5DoF Pose Representation
4.5.3 Experiments of 6DoF Pose Representation
4.6 Summary
Chapter 5 3D Path Integration based on 3D Grid Cells
5.1 Requirements of Robot's Path Integration in 3D Space
5.2 Biological Properties of 3D Grid Cells
5.3 Conceptual Models of 3D Grid Cells
5.3.1 Cube 3D GC Model
5.3.2 Conjunctive Cube 3D GC Model
5.3.3 Conjunctive Lie Group 3D GC Model
5.4 Computational Model of 3D Grid Cells
5.4.1 3D Path Integration
5.4.2 3D Pose Calibration
5.5 Experiments
5.5.1 Experiments of 3D Path Integration
5.5.2 Experiments of 3D Pose Calibration
5.6 Summary
Chapter 6 3D Spatial Experience Map
6.1 Requirements of Robot's 3D Spatial Experience Representation
6.2 4DoF Pose Experience Map
6.2.1 Encoding of 4DoF Pose Experiences
6.2.2 Creation of 4DoF Pose Experiences
6.2.3 Update of 4DoF Pose Experience Map
6.3 5DoF Pose Experience Map
6.3.1 Encoding of 5DoF Pose Experiences
6.3.2 Creation of 5DoF Pose Experiences
6.3.3 Update of 5DoF Pose Experience Map
6.4 6DoF Pose Experience Map
6.4.1 Encoding of 6DoF Pose Experiences
6.4.2 Creation of 6DoF Pose Experiences
6.4.3 Update of 6DoF Pose Experience Map
6.5 Experiments
6.5.1 Experiments of 4DoF Pose Experience Mapping
6.5.2 Experiments of 5DoF Pose Experience Mapping
6.5.3 Experiments of 6DoF Pose Experience Mapping
6.6 Summary
Chapter 7 3D Vision Perception
7.1 Overview of 3D Vision Perception
7.2 Image Processing
7.3 3D Visual Odometry for Estimating Self-motion Cues
7.3.1 4DoF Pose Estimation
7.3.2 5DoF and 6DoF Pose Estimation
7.4 Visual Template for Estimating External Cues
7.4.1 Overview of Local View Processing
7.4.2 Visual Template Learning and Recall
7.4.3 Local View Cell Calculation
7.5 Experiments
7.5.1 Experiments of Image Processing
7.5.2 Experiments of Self-motion Cues Estimation
7.5.3 Experiments of Visual Template Learning
7.6 Summary
Chapter 8 Performance Evaluation
8.1 Experimental Setup
8.1.1 Datasets
8.1.2 Parameters
8.2 Evaluation Metrics
8.2.1 Geometric Accuracy
8.2.2 Topological Consistency
8.3 Results
8.3.1 3D Spatial Experience Mapping
8.3.2 Snapshots of the 3D Navigational Spatial Cells
8.3.3 Visual Template Learning and Recall
8.3.4 3D Visual Odometry
8.4 Comparison with State-of-the-art 3D SLAM
8.4.1 Comparison with ORB-SLAM and LDSO
8.4.2 Performance Test by Integrating Visual Inertial Odometry
8.5 Summary
Chapter 9 Conclusion
9.1 Discussion and Summary
9.2 Future Work
Acknowledgements
Acknowledgements (in Chinese)
References
Appendix
A. Code and Datasets
B. Videos of Experiments
C. Parameters