Research on Improved Algorithms Based on the Extreme Learning Machine
Published: 2018-04-15 01:10
Keywords: neural networks + extreme learning machine; Source: master's thesis, Hangzhou Dianzi University, 2017
【Abstract】: In recent years, intelligent algorithms based on neural networks have been widely studied owing to their applications in deep learning, intelligent data processing, and big data. Among them, the hybrid classification algorithm combining the extreme learning machine with sparse representation (ELMSRC) has an advantage over the plain extreme learning machine (ELM) in recognition accuracy, and an advantage over sparse representation classification in model training time. However, relying only on the gap between the largest and second-largest entries of the ELM output vector to select the classifier is unreliable in some applications. To address this problem, this thesis proposes En-SRC, a hybrid of the voting-based ("competition-mechanism") extreme learning machine and sparse representation classification, in which the voting-based ELM (VELM) replaces the ELM in the data classification stage. In addition, by applying the VELM in the stage of the self-paced-learning-with-diversity algorithm that optimizes the weight vector w, the thesis proposes the SPLD-ELM algorithm; applying the regularized VELM in that stage instead yields the SP-RELM algorithm. The main contributions of this thesis are as follows: (1) The En-SRC hybrid algorithm based on the voting ELM and sparse representation. ELMSRC uses the ELM in the classifier-selection stage, but the ELM performs poorly on some high-noise samples, which means the reliability of that stage needs to be improved. En-SRC therefore performs classifier selection with the VELM instead of the ELM, effectively improving the accuracy of the selection. Experiments show that, compared with the ELM, En-SRC raises the recognition rate by roughly 2% to 7%; compared with ELMSRC, En-SRC achieves the same or a higher recognition rate while requiring less test time. (2) The SPLD-ELM algorithm based on self-paced learning with diversity. That algorithm uses a traditional iterative learning method in the model-training stage that optimizes the weight vector w, and iterative learning suffers from long training times. To solve this, SPLD-ELM replaces the traditional iterative method with the extreme learning machine algorithm in that stage. Experiments show that, compared with the ELM, SPLD-ELM's computational complexity increases somewhat, but its recognition rate improves effectively.
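For readers unfamiliar with the ELM and the voting ("competition-mechanism") variant the thesis builds on, the core training step and the majority-vote ensemble can be sketched as follows. This is a minimal NumPy illustration, not the thesis's implementation; the function names, the tanh activation, and the hyperparameters are assumptions.

```python
import numpy as np

def train_elm(X, Y, n_hidden=64, rng=None):
    """One single-hidden-layer ELM: hidden weights are random and fixed,
    and only the output weights beta are solved, in closed form, via the
    Moore-Penrose pseudo-inverse."""
    if rng is None:
        rng = np.random.default_rng(0)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights
    b = rng.standard_normal(n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                            # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ Y                      # least-squares output weights
    return W, b, beta

def elm_scores(X, model):
    """Class-score matrix (one row per sample) for a trained ELM."""
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

def velm_predict(X, models):
    """Voting-based ELM: several independently initialized ELMs each vote
    for a class, and the majority label wins for every sample."""
    votes = np.stack([elm_scores(X, m).argmax(axis=1) for m in models])
    n_classes = votes.max() + 1
    counts = np.apply_along_axis(np.bincount, 0, votes, minlength=n_classes)
    return counts.argmax(axis=0)                      # majority label per sample
```

Because each extra ensemble member costs only one additional pseudo-inverse solve, the voting variant keeps the ELM's speed advantage over iterative training.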
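The reliability gate that ELMSRC-style hybrids apply — trusting the fast classifier only when the gap between its two largest class scores is wide, and otherwise deferring the sample to sparse representation classification — can be sketched as below. `fallback` stands in for an SRC solver and is hypothetical, and the threshold value is illustrative.

```python
import numpy as np

def gated_classify(scores, fallback, x, threshold=0.2):
    """Reliability gate in the style of ELMSRC/En-SRC: keep the fast
    classifier's label only when the margin between its two largest class
    scores exceeds a threshold; otherwise hand the sample to a slower,
    more robust fallback classifier (SRC in the thesis)."""
    top_two = np.sort(scores)[-2:]
    margin = top_two[1] - top_two[0]      # gap between best and runner-up
    if margin >= threshold:
        return int(np.argmax(scores))     # confident: keep the fast label
    return fallback(x)                    # ambiguous: defer to the fallback
```

En-SRC's change relative to ELMSRC is only in where `scores` comes from: a voting ensemble rather than a single ELM, which makes the margin a more trustworthy confidence signal.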
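The SPLD-ELM idea — solving the network weights in closed form on the currently selected "easy" samples instead of iterating — can be sketched as follows. This uses plain hard self-paced weighting and omits the diversity term the thesis's SPLD variant adds; all names and parameters are illustrative, not the thesis's code.

```python
import numpy as np

def self_paced_weights(losses, lam):
    """Hard self-paced weighting: a sample joins training only if its
    current loss is below the age parameter lam."""
    return (losses < lam).astype(float)

def spl_elm_round(X, Y, losses, lam, n_hidden=64, rng=None):
    """One self-paced round: select the easy samples, then solve the ELM
    output weights on that subset in closed form (no inner iteration)."""
    if rng is None:
        rng = np.random.default_rng(0)
    v = self_paced_weights(losses, lam)
    keep = v > 0
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights
    b = rng.standard_normal(n_hidden)                 # random hidden biases
    H = np.tanh(X[keep] @ W + b)                      # hidden outputs, easy subset
    beta = np.linalg.pinv(H) @ Y[keep]                # closed-form solve
    return (W, b, beta), v
```

In a full run, lam would grow each round so harder samples are gradually admitted; replacing the inner iterative solve with this one-shot least-squares step is what trades extra per-round algebra for a shorter overall training time.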
【Degree-granting institution】: Hangzhou Dianzi University
【Degree level】: Master's
【Year conferred】: 2017
【CLC number】: TP181
Article ID: 1751834
Link: http://sikaile.net/kejilunwen/zidonghuakongzhilunwen/1751834.html