Video Analysis Based on Sequential Deep Learning: Modeling, Representation and Applications
[Abstract]: In recent years, video data has grown explosively. Storing, recognizing, sharing, editing, and generating such large volumes of video all demand accurate video semantic analysis. Current deep-learning-based video semantic analysis typically proceeds in two steps: 1) a convolutional neural network (CNN) extracts a visual feature representation of each frame; 2) a long short-term memory (LSTM) recurrent network models the resulting feature sequence. Building on a comprehensive survey of existing video semantic analysis techniques, this thesis studies the deep learning models underlying the classical tasks of video semantic classification and video semantic description. It proposes a continuous Dropout algorithm, a CNN whose parameters are robust to image transformations, and a CNN whose structure is robust to image transformations. To address the difficulty of training deep recurrent models, it proposes an unsupervised layer-by-layer greedy learning approach that improves both model performance and training efficiency. Finally, in view of the limitations of the existing one-way mapping framework from video sequences to word sequences, it proposes a novel multi-way sequence learning algorithm based on a latent semantic representation. The main work and innovations of this thesis are summarized as follows:

1. Continuous Dropout. Dropout has proved effective for training deep convolutional neural networks: by randomly masking units in a large network, it effectively trains an ensemble of subnetworks at once. Inspired by the graded, rather than all-or-nothing, firing of biological neurons, we extend the traditional binary Dropout to continuous Dropout. On the one hand, continuous Dropout is closer to the activation behaviour of neurons in the human brain than binary Dropout; on the other hand, we show that continuous Dropout retains the property of preventing co-adaptation of feature detectors.

2. Parameter-robust CNN. CNNs achieve state-of-the-art results on many visual tasks, and nearly all visual information is now processed by such networks; yet current CNN models remain fragile under spatial transformations of the input image. In principle, a layered, parameterized CNN built from convolution (matrix multiplication plus nonlinear activation) and pooling operations should learn a robust mapping from transformed input images to transformation-invariant representations; in practice, each convolution kernel must learn invariant features under many combinations of transformations of its input feature maps. We therefore train the CNN so that its learned parameters are insensitive to transformations of the input image, without adding extra supervisory information to the optimization process or modifying the training images. The resulting network improves on existing CNNs in small-scale image recognition, large-scale image recognition, and image retrieval.

3. Structure-robust CNN (block reordering). The combination of convolution and pooling exhibits little invariance to the location of meaningful objects in the input. Some networks encode such invariance into the parameters via data augmentation, but this limits the model's capacity to learn the object content itself. To let the model concentrate on what an object is rather than where it is, we propose sorting the local blocks of the feature response map before feeding them to the next layer. When block reordering is combined with convolution and pooling operations, the network produces a consistent representation of an object regardless of its location in the input image. We demonstrate that the proposed block-reordering module improves CNN performance on several benchmarks, including MNIST digit recognition, large-scale image recognition, and image retrieval.

4. Greedy layer-wise training of deep LSTMs. Recurrent neural networks (RNNs), in particular the LSTMs commonly used in video analysis, have shown great potential for modeling sequential data in computer vision and natural language processing. However, multi-layer LSTMs often fail to improve on shallow ones and converge slowly; the difficulty lies in initialization, where gradient-based optimization tends to converge to poor local solutions. We propose a novel encoder-decoder learning framework that initializes a multi-layer LSTM in a greedy, layer-by-layer fashion, training each new LSTM layer to retain the main information of the layer below. The pretrained multi-layer LSTM outperforms randomly initialized LSTMs on regression (the adding problem), handwritten digit recognition (MNIST), video classification (UCF-101), and machine translation (WMT'14), and the greedy layer-by-layer scheme speeds up convergence by a factor of four.

5. Shared latent representation for sequence-to-sequence learning. Sequence-to-sequence learning underlies popular applications of deep learning such as video captioning and speech recognition. Existing methods first encode the input sequence into a fixed-size vector and then decode the target sequence from that vector; although simple and intuitive, such a one-way mapping model is task-specific. We propose a star-like framework for generic and flexible sequence-to-sequence learning, in which different types of media content (peripheral nodes) are encoded into a shared latent representation (SLR), the central node. The media-invariance of the SLR acts as a high-level regularization on the intermediate vector, forcing it not only to capture the implicit representation within each single medium, as an autoencoder does, but also to serve as the mapping model across media; moreover, the SLR model is content-specific. Validated on the YouTube2Text and MSR-VTT datasets, our SLR model achieves significant improvements on the video-to-sentence task and produces the first sentence-to-video results.
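The idea behind continuous Dropout can be sketched in a few lines: instead of a Bernoulli 0/1 mask, each unit is scaled by a continuous random variable. The clipped-Gaussian mask and its parameters below are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

def continuous_dropout(x, rng, mu=0.5, sigma=0.2):
    """Continuous Dropout (sketch): scale each unit by a continuous
    random mask instead of a Bernoulli 0/1 mask.  The mask here is a
    Gaussian clipped to [0, 1]; the distribution and its parameters
    are assumptions for illustration only."""
    mask = np.clip(rng.normal(mu, sigma, size=x.shape), 0.0, 1.0)
    # Rescale by the mask mean so the expected activation matches the
    # un-dropped one, as with standard "inverted" dropout.
    return x * mask / mu

rng = np.random.default_rng(0)
h = np.ones((4, 8))               # a toy activation map
out = continuous_dropout(h, rng)
print(out.shape)                  # (4, 8)
```

As with binary Dropout, the mask would be applied only at training time; at test time the layer is left unscaled because of the inverted rescaling above.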
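The block-reordering module can be illustrated on a single-channel feature map: partition it into local blocks, sort the blocks by response strength, and re-tile them, so the output depends on what responses appeared rather than where. The sorting criterion (mean response) and tiling order are illustrative assumptions.

```python
import numpy as np

def block_reorder(fmap, block=2):
    """Block reordering (sketch): partition a single-channel feature
    map into non-overlapping block x block patches, sort the patches
    by mean response (strongest first), and re-tile them.  The output
    is then invariant to where a strong response appeared."""
    h, w = fmap.shape
    assert h % block == 0 and w % block == 0
    patches = [fmap[i:i + block, j:j + block]
               for i in range(0, h, block)
               for j in range(0, w, block)]
    patches.sort(key=lambda p: -p.mean())       # strongest block first
    rows, cols = h // block, w // block
    return np.block([[patches[r * cols + c] for c in range(cols)]
                     for r in range(rows)])

a = np.zeros((4, 4)); a[0, 0] = 1.0             # object "top-left"
b = np.zeros((4, 4)); b[2, 2] = 1.0             # same object, shifted
print(np.array_equal(block_reorder(a), block_reorder(b)))  # True
```

The two inputs differ only in the object's location, yet their reordered maps coincide, which is the invariance the module is meant to provide to the next layer.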
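The greedy layer-by-layer pretraining procedure can be sketched with a simple stand-in: in the thesis each layer is an LSTM encoder-decoder trained on sequences, but here each layer is a closed-form linear autoencoder (PCA) on static vectors, an assumption made purely to keep the example self-contained. What matters is the procedure: train a layer to retain the main information of the layer below, freeze it, and feed its codes to the next layer.

```python
import numpy as np

def fit_linear_autoencoder(X, hidden):
    """Closed-form stand-in for training one encoder-decoder layer:
    the top `hidden` principal directions give the best linear
    reconstruction, playing the role of the trained layer."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:hidden].T          # encoder weights (in_dim, hidden)

def greedy_pretrain(X, layer_sizes):
    """Greedy layer-by-layer pretraining (sketch): each new layer is
    trained to preserve the main information of the layer below, then
    frozen; its codes become the next layer's input."""
    weights, H = [], X
    for size in layer_sizes:
        W = fit_linear_autoencoder(H, size)
        weights.append(W)
        H = H @ W                 # codes feed the next layer
    return weights

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))
ws = greedy_pretrain(X, [8, 4])
print([w.shape for w in ws])      # [(16, 8), (8, 4)]
```

In the thesis the same schedule initializes a multi-layer LSTM, after which the whole stack is fine-tuned end to end with gradient descent.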
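The star topology of the SLR framework can be shown structurally with untrained linear maps; all dimensionalities and the linear encoders/decoders below are illustrative assumptions, whereas the thesis uses recurrent encoders and decoders over real video and text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensionalities for two media and the shared latent space
# (all sizes are assumptions for illustration).
DIM = {"video": 12, "text": 7}
LATENT = 5

encoders = {m: rng.normal(size=(d, LATENT)) for m, d in DIM.items()}
decoders = {m: rng.normal(size=(LATENT, d)) for m, d in DIM.items()}

def translate(x, src, dst):
    """Star-shaped mapping: any medium -> shared latent representation
    (central node) -> any medium (peripheral node).  Adding a new
    medium needs one new encoder/decoder pair, not one model per
    source-target combination."""
    slr = x @ encoders[src]       # peripheral node -> central node
    return slr @ decoders[dst]    # central node -> peripheral node

clip = rng.normal(size=(1, DIM["video"]))
sentence = translate(clip, "video", "text")    # video -> sentence
video = translate(sentence, "text", "video")   # sentence -> video
print(sentence.shape, video.shape)             # (1, 7) (1, 12)
```

Because every path passes through the central node, the same trained SLR supports both video-to-sentence and the reverse sentence-to-video direction reported in the thesis.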
【Degree-granting institution】: University of Science and Technology of China
【Degree level】: Doctorate
【Year conferred】: 2017
【CLC number】: TP391.41