

Pose Spatio-Temporal Feature Extraction and Matching Based on Graph Neural Networks

Published: 2024-12-19 04:28
  With the development of technology, video and photo data have grown rapidly. Analyzing the features in these images and videos helps us understand human behavior and has important theoretical and practical significance, so we need to analyze the features of poses and pose sequences. In this thesis, we study how to use graph neural networks to extract features from 2D human poses and match them to the corresponding targets. Previous methods have explored using graph convolutions to regress the corresponding 3D pose from a 2D pose and to recognize the corresponding action, but they assume the natural topology of the human skeleton in the adjacency matrix, which limits the receptive field of the graph convolution. In addition, these methods use only the positional information of the 2D pose and cannot overcome the depth ambiguity caused by insufficient information. Meanwhile, annotating 3D poses outdoors is extremely difficult, which greatly limits the outdoor generalization of existing pose feature extraction and matching models. To address these problems, we propose two main improvements. First, we propose an adaptive semantic graph convolution operator that learns the strength of the natural connections of the human skeleton while also learning links between joints that are not directly connected. Second, we propose using ordinal depth information, i.e., whether a joint is closer to the camera than its parent joint, to build the skeleton graph. On the one hand, this helps reduce the depth ambiguity inherent in 2D poses; on the other hand, it helps overcome the difficulty of obtaining 3D joint annotations in the wild. We conduct experiments on three different pose spatio-temporal feature extraction and matching tasks: regressing the corresponding 3D pose from a single 2D skeleton, from 2...
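The two ideas above can be sketched together: an adjacency matrix that combines weighted natural skeleton edges with freely learned links between any joint pair, applied to per-joint features of 2D position plus an ordinal-depth sign. This is a minimal NumPy illustration, not the thesis's implementation; the 5-joint toy skeleton, the matrices `M` and `B` (here random stand-ins for learned parameters), and all sizes are assumptions for demonstration.

```python
import numpy as np

# Hypothetical toy skeleton: 0=pelvis (root), 1=spine, 2=head, 3=l_hip, 4=r_hip.
PARENTS = [-1, 0, 1, 0, 0]
J = len(PARENTS)

# Natural-skeleton adjacency with self-loops, as in a plain graph convolution.
A = np.eye(J)
for j, p in enumerate(PARENTS):
    if p >= 0:
        A[j, p] = A[p, j] = 1.0

rng = np.random.default_rng(0)
M = rng.uniform(0.5, 1.5, (J, J))        # stand-in for learned edge strengths
B = 0.01 * rng.standard_normal((J, J))   # stand-in for learned non-adjacent links
W = rng.standard_normal((3, 8))          # feature transform: 3 -> 8 channels

def ordinal_depth(z):
    """Sign of each joint's depth relative to its parent: -1 closer to the
    camera than the parent, +1 farther, 0 for the root (no parent)."""
    return np.array([0.0 if p < 0 else np.sign(z[j] - z[p])
                     for j, p in enumerate(PARENTS)])

def adaptive_gcn_layer(xy, odepth):
    """One adaptive graph-convolution step on (x, y, ordinal-depth) inputs."""
    H = np.concatenate([xy, odepth[:, None]], axis=1)   # (J, 3) joint features
    A_hat = A * M + B                                   # adaptive adjacency
    A_hat = A_hat / A_hat.sum(axis=1, keepdims=True)    # row-normalize
    return np.maximum(A_hat @ H @ W, 0.0)               # aggregate + ReLU

xy = rng.standard_normal((J, 2))   # 2D joint positions
z = rng.standard_normal(J)         # depths; only their ordinal signs are used
out = adaptive_gcn_layer(xy, ordinal_depth(z))
print(out.shape)  # (5, 8)
```

Because `B` is unconstrained, gradients during training could strengthen links between joints that the skeleton graph does not connect (e.g. the two hips and the head), widening the receptive field beyond the natural topology.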

[Pages]: 92

[Degree]: Master's

[Table of Contents]:
Abstract (in Chinese)
Abstract
Chapter 1 Introduction
    1.1 Pose Feature Extraction and Matching
    1.2 Graph Neural Network
    1.3 Challenges
    1.4 Contribution
    1.5 Thesis Outline
Chapter 2 Literature Review
    2.1 Three Dimensional Human Pose Estimation from a Single Monocular Image
    2.2 Three Dimensional Human Pose Estimation from a Monocular Video
    2.3 Skeleton-based Action Recognition
Chapter 3 Three Dimensional Human Pose Estimation from a Monocular Image
    3.1 Semantic graph convolution networks
        3.1.1 Graph Convolution
        3.1.2 Semantic Graph Convolution
        3.1.3 Non-Local Layer
        3.1.4 Semantic Graph Convolutional Networks for 3D Human Pose Regression
    3.2 Adaptive Semantic Graph Convolution
    3.3 Utilizing Ordinal Depth Information
    3.4 Adaptive Semantic Graph Convolution Networks for 3D Human Pose Estimation
    3.5 Experiments
        3.5.1 Datasets and Protocols
        3.5.2 Implementation Details
        3.5.3 Experimental Results
        3.5.4 Ablation Experiments
        3.5.5 Visualization of the Adaptive Adjacency Matrix
        3.5.6 Qualitative Results
    3.6 Summary
Chapter 4 Three Dimensional Human Pose Estimation from a Monocular Video
    4.1 Introduction
    4.2 Problem Definition
    4.3 Spatial Temporal Graph Convolution
    4.4 Adaptive Semantic Graph Convolution and Using Ordinal Depth Information
        4.4.1 Utilizing Ordinal Depth Information
        4.4.2 Adaptive Semantic Graph Convolution for Spatial-Temporal Pose Graph
    4.5 Experiments
        4.5.1 Implementation Details
        4.5.2 Experimental Results
        4.5.3 Ablation Study
    4.6 Summary
Chapter 5 Action Recognition Based on 2D Skeleton with Ordinal Depth
    5.1 Introduction
    5.2 Method
        5.2.1 3D Human Pose Estimation
        5.2.2 Spatial-Temporal Graph Convolutional Network for Action Recognition
    5.3 Experiments
        5.3.1 Dataset and Evaluation Protocol
        5.3.2 Implementation Details
        5.3.3 Experimental Results
    5.4 Summary
Conclusions
Conclusions (in Chinese)
References
Acknowledgements




