Rough Set Models and Entropy Methods for Multi-Rule Ordered Decision Making
Topic: ordered decision making + rough sets. Source: Ph.D. dissertation, University of Electronic Science and Technology of China, 2017.
【Abstract】: With the development and spread of network information technology, the Internet has become the world's largest heterogeneous, dynamic, and open distributed repository, holding the most material across the widest range of categories. Ordered data are a class of data found throughout the Internet, in areas such as warehousing and logistics, ecological agriculture, and investment risk analysis, and multi-rule ordered decision making has become an important research direction in Web knowledge discovery. Rough set theory characterizes the inconsistency in classification problems through the basic ideas of granulation and approximation, making it an effective tool for uncertain classification; information entropy is an important measure of information uncertainty. Building on granulation and approximation from rough computing methodology, combined with entropy's ability to quantify uncertainty, this dissertation studies multi-rule ordered decision making in depth and establishes rough set models and information entropy methods for it. The work proceeds along the following lines.

First, a multi-granulation preference-relation rough computing model for multi-rule ordered decision making is established, together with a granular structure selection algorithm. The traditional preference (dominance) relation is a non-strict representation of preference; the dissertation extends it to a strict preference relation, improves the preference-relation rough set model, and carries the model into multi-rule ordered decision making, yielding a multi-granulation preference-relation rough set model. In existing preference-relation rough sets, a sample belongs to the lower approximation of a preference set only if every sample worse than it is contained in that set, which renders the lower approximation essentially meaningless. The improved model overcomes this: a sample belongs to the lower approximation as long as some sample worse than it belongs to the preference set, which better matches practice. Taking the costs of different rules into account, a cost-sensitive multi-granulation preference-relation rough set model is also established, and the models are applied to granular structure selection.

Second, a multi-granulation fuzzy preference-relation rough computing model for multi-rule ordered decision making is established, together with preference decision and sample compression methods. The classical preference-relation rough set, built on the traditional preference relation, can express only the order between data, not the degree of preference. Because the upper and lower approximations of existing fuzzy preference-relation rough set models run counter to classical rough set ideas, additively consistent fuzzy preference relations are introduced and an improved fuzzy preference-relation rough set model is proposed, then extended to multi-rule ordered decision making to obtain a multi-granulation fuzzy preference-relation rough set model and its cost-sensitive variant. Preference decision and sample compression algorithms are designed on this basis.

Third, the concept of preference inconsistency entropy is proposed, an information entropy model for multi-rule ordered decision making is established, and attribute reduction and sample compression algorithms are designed. Shannon entropy is extended to ordered decision making, with preference inconsistency entropy measuring the inconsistency and uncertainty of preferences in an ordered decision system. Being attribute-based, it effectively measures the decision uncertainty caused by preference inconsistency between the conditional attributes and the decision, reflects the importance of each conditional attribute in ordered decision making, and performs well in feature selection and attribute reduction.

Fourth, the preference inconsistency entropy of a sample is defined and extended to a weighted version, and a sample-level preference decision algorithm is proposed. Attribute-based preference inconsistency entropy is insufficient for ordered decisions about individual samples. For the sample decision problem in preference-inconsistent ordered systems, the preference inconsistency entropy of a sample is defined from preference information granules and the sample's degree of preference inconsistency. It focuses on individual samples, measures the preference inconsistency a specific sample causes, and performs well at classifying samples in ordered decision systems; classification based on the global preference inconsistency entropy yields results close to the original decisions.

Fifth, a rough-set-based nearest neighbor sample compression method (the FRSC algorithm) is proposed. The time and space complexity of the nearest neighbor rule are both closely tied to the number of training samples and grow quickly as that number increases, yet the classification result is usually decided by samples near the decision boundary, so computing a consistent subset of the training set is an important way to improve the rule's efficiency. Rough set theory approximates the decision space through upper and lower approximations, and the samples in the boundary region are typically those with small lower approximations, so applying rough set methods to training set compression for the nearest neighbor rule is an effective way to compute a consistent subset quickly. As with the nearest neighbor rule, the time to compute upper and lower approximations in rough set methods also grows rapidly with the number of training samples, and the approximations are likewise determined by boundary samples, so the proposed compression method is equally significant for rough set applications and can effectively improve the efficiency of rough computing.

In summary, the dissertation studies multi-rule ordered decision making from the two perspectives of rough sets and information entropy, establishes multi-granulation preference-relation and fuzzy preference-relation rough computing models, defines attribute-level and sample-level preference inconsistency entropy, and forms a rough set and information entropy theory for multi-rule ordered decision making.
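The contrast drawn in the first contribution can be made concrete with a minimal Python sketch of the two lower-approximation tests described above. All names, the toy data, and the single-criterion order are illustrative assumptions, not the dissertation's actual definitions:

```python
# Hedged sketch: classical "all worse samples must lie in the preference set"
# versus the relaxed "some worse sample lies in the preference set" test.
# Toy data and the order `leq` are assumptions for illustration only.

def dominated_set(x, universe, leq):
    """Samples strictly worse than (or tied below) x, excluding x itself."""
    return {y for y in universe if leq(y, x) and y != x}

def lower_approx_strict(cl, universe, leq):
    # classical test: every worse sample must already belong to cl
    return {x for x in cl if dominated_set(x, universe, leq) <= cl}

def lower_approx_relaxed(cl, universe, leq):
    # improved test: one worse sample in cl suffices
    # (samples with no worse sample pass vacuously)
    result = set()
    for x in cl:
        worse = dominated_set(x, universe, leq)
        if not worse or worse & cl:
            result.add(x)
    return result

# toy ordered data: four samples scored on a single criterion
scores = {'a': 1, 'b': 2, 'c': 3, 'd': 4}
U = set(scores)
leq = lambda y, x: scores[y] <= scores[x]
cl_ge_3 = {x for x in U if scores[x] >= 3}   # upward union "score >= 3"

print(sorted(lower_approx_strict(cl_ge_3, U, leq)))   # → []
print(sorted(lower_approx_relaxed(cl_ge_3, U, leq)))  # → ['d']
```

The empty strict result on even this tiny example illustrates the abstract's point that the classical lower approximation is effectively meaningless, while the relaxed test retains informative members.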
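The consistent-subset idea behind the fifth contribution can be illustrated with Hart's classic condensed nearest neighbour procedure. This is not the FRSC algorithm itself (which selects boundary samples via rough lower approximations); it is only a sketch of the kind of training-set reduction the abstract describes, on assumed toy data:

```python
# Hedged sketch: Hart's condensed nearest neighbour (CNN), a classic way to
# compute a consistent subset for the 1-NN rule. FRSC is not reproduced here.
import math

def nearest_label(point, subset):
    # 1-NN label of `point` among the (point, label) pairs in `subset`
    return min(subset, key=lambda s: math.dist(point, s[0]))[1]

def condense(samples):
    """Grow a subset until it classifies every training sample correctly."""
    subset = [samples[0]]
    changed = True
    while changed:
        changed = False
        for p, y in samples:
            if nearest_label(p, subset) != y:
                subset.append((p, y))   # misclassified -> keep this sample
                changed = True
    return subset

# toy 1-D data: two well-separated classes; interior points get discarded
data = [((x,), 0) for x in (0.0, 0.1, 0.2, 0.3)] + \
       [((x,), 1) for x in (1.0, 1.1, 1.2, 1.3)]
kept = condense(data)
print(len(kept), "of", len(data))   # → 2 of 8
```

The retained samples are exactly the ones nearest the class boundary, mirroring the abstract's observation that boundary samples decide both the nearest neighbor result and the rough approximations.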
【Degree-granting institution】: University of Electronic Science and Technology of China
【Degree level】: Ph.D.
【Year conferred】: 2017
【CLC number】: TP18