Deep Learning Based on Support Vector Machines and Probabilistic Output Networks
Published: 2019-04-10 19:06
【Abstract】: Deep learning refers to deep architectures that automatically extract features and can learn from unlabeled samples. Although a well-trained deep network delivers good performance, the hyperparameters of the learning algorithm must be precisely configured and determined by hand. The support vector machine (SVM), by contrast, is essentially a shallow structure and cannot automatically extract abstract features that represent the data. It is therefore of both theoretical and practical significance to retain the advantages of the SVM while endowing it with the ability to extract the internal structural features of data automatically. For classification, this thesis combines the deep structure of deep learning, the structural risk minimization of SVMs, and the conditional probability estimation of probabilistic output networks to build a multi-layer SVM architecture. Kernel parameters are selected over a grid: for each candidate, a Kolmogorov-Smirnov (K-S) statistic measures the agreement between the empirical cumulative distribution of the outputs of each of the positive and negative classes and the cumulative distribution of a fitted Beta distribution, and the candidate maximizing the product of the two resulting consistency p-values is chosen as the kernel parameter of the SVM model. The corresponding outputs are the features extracted by that layer and serve as the input to the next layer, until the model reaches its stopping condition. The proposed model is experimentally validated and analyzed on three commonly used classification data sets. For regression, the same ingredients are combined into a multi-layer SVM structure; here the kernel parameter on the grid is selected that maximizes the single K-S consistency p-value between the Beta cumulative distribution fitted to the outputs and their empirical cumulative distribution. As before, each layer's output serves as the next layer's input until the stopping condition is met, and the model is experimentally validated and analyzed on three commonly used regression data sets.
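The kernel-selection step described in the abstract can be sketched roughly as follows. This is a minimal illustrative reconstruction, not the thesis's exact procedure: the candidate grid, the use of an RBF kernel, and the logistic squashing of SVM decision values into (0, 1) before the Beta fit are all assumptions made for the sketch.

```python
import numpy as np
from scipy import stats
from sklearn.datasets import make_classification
from sklearn.svm import SVC

def ks_consistency_pvalue(scores):
    """K-S p-value measuring agreement between the empirical CDF of
    `scores` (values in (0, 1)) and a Beta distribution fitted to them."""
    a, b, _, _ = stats.beta.fit(scores, floc=0, fscale=1)
    return stats.kstest(scores, stats.beta(a, b).cdf).pvalue

def select_gamma(X, y, grid):
    """Pick the RBF width from `grid` that maximizes the product of the
    positive-class and negative-class K-S consistency p-values."""
    best_gamma, best_score = None, -np.inf
    for gamma in grid:
        svm = SVC(kernel="rbf", gamma=gamma).fit(X, y)
        # Squash decision values into (0, 1) so a Beta fit is meaningful
        # (an assumption of this sketch; the thesis may map outputs differently).
        p = 1.0 / (1.0 + np.exp(-svm.decision_function(X)))
        p = np.clip(p, 1e-6, 1 - 1e-6)
        score = ks_consistency_pvalue(p[y == 1]) * ks_consistency_pvalue(p[y == 0])
        if score > best_score:
            best_gamma, best_score = gamma, score
    return best_gamma, best_score

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
gamma, score = select_gamma(X, y, grid=[0.01, 0.1, 1.0])
print(gamma, score)
```

In the multi-layer structure, the outputs produced with the selected kernel parameter would then become the input features of the next layer; for the regression variant, a single K-S p-value on the layer's outputs replaces the two-class product.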
【Degree-granting institution】: Xi'an University of Technology
【Degree level】: Master's
【Year conferred】: 2017
【CLC number】: TP181
Document ID: 2456044
Link: http://sikaile.net/kejilunwen/zidonghuakongzhilunwen/2456044.html