Research on Intelligent Modeling Methods with Better Transparency and Interpretability
Topic: intelligent modeling + fuzzy systems; Source: master's thesis, Jiangnan University, 2017
【Abstract】: In complex scenarios such as face recognition and speech recognition, intelligent models, typified by neural networks, already achieve very high recognition accuracy. In specific fields such as intelligent medical diagnosis, however, far more transparency and interpretability are demanded of modeling methods, because interpretable models help people uncover the underlying regularities of a problem. Generally speaking, traditional statistical learning methods are simple in themselves and therefore easy to understand and explain, whereas intelligent models behave like black boxes: their transparency is poor, and the reasoning inside them is hard to explain. Fuzzy systems, whose inference proceeds by deriving fuzzy rules and therefore carries stronger semantics, perform comparatively well in terms of interpretability. Early fuzzy systems had low complexity: only a few fuzzy rules were needed to form the rule base, and domain experts could take part in formulating the rules, so the resulting systems were fairly transparent. As fuzzy systems have merged with neural networks, however, the growing complexity of fuzzy rules and system structure has eroded that interpretability. To obtain intelligent models with better transparency and interpretability, this thesis carries out the following research:
1) It compares and analyses the interpretability of artificial-intelligence models such as neural networks and fuzzy systems, e.g. how the number of neurons or the number of fuzzy rules affects interpretability, and it compares how different classification strategies affect the interpretability of the resulting classifier, e.g. the respective advantages and drawbacks of the "one-versus-one" and "one-versus-rest" strategies.
2) Based on minimax probability decision techniques, and combining neural networks, fuzzy systems and the kernel trick, it derives a more interpretable generalized hidden-mapping minimax probability machine and gives a physical interpretation of the learned α index. Simple experiments examine how the various intelligent models differ, at the level of interpretability, on classification problems.
3) For the recognition of epileptic EEG signals, it connects a single-hidden-layer radial-basis-function network to a classification tree on the basis of minimax probability decision techniques, fully exploiting the different separability between the two classes of data. The resulting radial-basis minimax probability classification tree is more interpretable, with a reasoning process that is clear and easy to follow.
4) Based on the interval type-2 TSK fuzzy system, it uses fuzzy subspace clustering and grid partitioning to generate sparse, regularly placed rule centres, builds rule antecedents with more concise and clearer semantics, and simplifies the rule consequents to zero-order form, reducing complexity and yielding a more interpretable interval type-2 fuzzy subspace zero-order TSK system.
Experiments on a large body of medical data verify the effectiveness and advantages of the proposed methods.
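The minimax probability machine underlying contribution 2 maximizes the worst-case (distribution-free) probability α of correct classification, given only each class's mean and covariance. In one dimension this has a simple closed form, which makes the physical meaning of α easy to see. A minimal sketch, not the thesis's generalized hidden-mapping version; the function and variable names are illustrative:

```python
import math

def mpm_1d(mu1, sigma1, mu2, sigma2):
    """Worst-case optimal threshold and probability bound alpha for
    separating two 1-D classes known only by mean and std deviation."""
    # In 1-D only the threshold matters; kappa measures separation.
    kappa = abs(mu1 - mu2) / (sigma1 + sigma2)
    # alpha = worst-case probability that a sample from either class
    # falls on its own side of the threshold (Chebyshev-type bound).
    alpha = kappa ** 2 / (1.0 + kappa ** 2)
    # The threshold splits the gap in proportion to each class's spread.
    threshold = (mu1 * sigma2 + mu2 * sigma1) / (sigma1 + sigma2)
    return threshold, alpha

# Two classes with means 4 and 0, both with unit standard deviation:
t, a = mpm_1d(4.0, 1.0, 0.0, 1.0)
print(t, a)  # threshold 2.0, alpha 0.8
```

The learned α is thus directly interpretable as a guaranteed lower bound on accuracy, which is the kind of transparent semantics the thesis argues for.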
【Degree-granting institution】: Jiangnan University
【Degree level】: Master's
【Year awarded】: 2017
【Classification numbers】: R318; TP183
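Contribution 4 of the abstract simplifies rule consequents to zero-order (constant) form. The inference step of such a system can be sketched for the plain type-1 case as follows; the rule centres, widths and consequent constants here are illustrative, not taken from the thesis:

```python
import math

def tsk0_infer(x, rules):
    """Zero-order TSK inference. Each rule is (centres, widths, constant):
    firing strength = product of Gaussian memberships over input dims;
    output = firing-strength-weighted average of the rule constants."""
    num = den = 0.0
    for centres, widths, const in rules:
        w = 1.0
        for xi, c, s in zip(x, centres, widths):
            w *= math.exp(-((xi - c) ** 2) / (2.0 * s ** 2))
        num += w * const
        den += w
    return num / den if den > 0 else 0.0

# Two illustrative rules on a 1-D input:
rules = [([0.0], [1.0], 0.0),   # "IF x is near 0 THEN y = 0"
         ([4.0], [1.0], 1.0)]   # "IF x is near 4 THEN y = 1"
print(tsk0_infer([0.0], rules))  # close to 0
print(tsk0_infer([4.0], rules))  # close to 1
```

Because each consequent is a single constant rather than a linear function of the inputs, every rule reads as a plain IF-THEN statement, which is the source of the interpretability gain the thesis claims.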