

An Empirical Study of a Credit Scoring Model Based on the LeNet-5 Model and Gated Convolutional Neural Networks

Published: 2019-05-12 15:21
【Abstract】: A credit score is a numerical expression of a customer's credit rating. It is a useful tool for evaluating and guarding against default risk, and an important method in credit risk assessment. Because risk is objectively present and difficult to control, an appropriate credit scoring model must be chosen to limit the losses caused by credit risk events. Many methods exist for building credit scores, and this thesis first reviews the literature on credit scoring models. Commonly used scoring methods include linear regression, discriminant analysis, Bayesian networks, logistic regression, fuzzy logic, decision trees, support vector machines, genetic algorithms, and neural networks. Among these, neural network models have stronger nonlinear modeling ability and can improve scoring accuracy, so this thesis sets out to construct a new neural network credit scoring model. A convolutional neural network (CNN) is a multi-layer neural network that extracts features from data by inserting convolution operations between neurons in adjacent layers. Among the many CNN variants, LeNet-5 is the classic architecture: it is characterized by weight sharing and max pooling, achieves strong recognition performance, and its core is the alternating stack of convolutional and max-pooling layers. The gated convolutional network (GCNN) is a deep learning model proposed by Facebook's artificial intelligence laboratory that shows excellent training performance. A CNN models the input as a whole by sharing convolution kernel weights. A recurrent neural network (RNN) is a neural network with loops that retains past information in the system; it has a memory and can reuse previously computed information in later computations, but a standard RNN cannot learn long-term dependencies. Long short-term memory (LSTM) is a special kind of RNN that avoids the long-term dependency problem. All RNNs consist of chains of repeated neural network modules; in an LSTM, the single layer in each module is replaced by four neural network layers that interact in a particular way, and a gating mechanism then selects which information to keep. The GCNN model introduces this LSTM-style gating mechanism into the convolutional network: shared weights model the input globally, from local to whole, while the gates identify and filter information, and the approach has achieved good results. The main work of this thesis is as follows. Building on the GCNN idea of adding an input gate to control information flow in a convolutional network, and following the structure of the classic CNN model LeNet-5, the thesis combines GCNN with the LeNet-5 model, improves the layer structure, and exploits the strengths of both models to raise the network's optimization ability; combined with the characteristics of personal credit risk, this yields a new credit scoring model. Borrower data from a well-known P2P lending platform are used as the sample to train and test the new model based on GCNN and LeNet-5, with a multi-class support vector machine as the comparison experiment. The experimental results show that the model performs well empirically.
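To make the gating idea described above concrete, the following is a minimal sketch, not the thesis's actual architecture: it assumes PyTorch, treats the tabular borrower features as a one-channel 1-D input, and stacks gated convolutions (output = conv(x) * sigmoid(gate(x))) in a LeNet-5-like pattern of alternating convolution and max-pooling layers before a binary good/bad credit classifier. All names (GatedConv1d, CreditGCNN), layer sizes, and the feature count are hypothetical.

```python
# Hypothetical sketch of a gated-convolution credit scorer; layer sizes,
# names, and the 1-D treatment of tabular features are assumptions,
# not the thesis's exact model.
import torch
import torch.nn as nn


class GatedConv1d(nn.Module):
    """Gated convolution block: output = conv(x) * sigmoid(gate(x))."""

    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, padding=pad)
        self.gate = nn.Conv1d(in_ch, out_ch, kernel_size, padding=pad)

    def forward(self, x):
        # The sigmoid gate decides, element by element, how much of the
        # convolved signal passes through (the LSTM-style input gate idea).
        return self.conv(x) * torch.sigmoid(self.gate(x))


class CreditGCNN(nn.Module):
    """LeNet-5-style stack: gated convolutions alternating with max pooling."""

    def __init__(self, n_features=32):
        super().__init__()
        self.features = nn.Sequential(
            GatedConv1d(1, 6),
            nn.MaxPool1d(2),
            GatedConv1d(6, 16),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * (n_features // 4), 2),  # logits: good / bad credit
        )

    def forward(self, x):
        # x: (batch, n_features) borrower attributes, reshaped to one channel
        # so that 1-D convolution can be applied across the feature axis.
        return self.classifier(self.features(x.unsqueeze(1)))


# Usage sketch: score a random batch of 4 borrowers with 32 features each.
model = CreditGCNN(n_features=32)
logits = model(torch.randn(4, 32))
print(logits.shape)  # torch.Size([4, 2])
```

The key design point illustrated here is the second convolution acting purely as a gate: it adds a learned, data-dependent filter on top of LeNet-5's weight sharing and pooling, which is the combination the thesis builds on.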
【Degree-granting institution】: Shenzhen University
【Degree level】: Master's
【Year conferred】: 2017
【Classification number】: F832.4






Article link: http://sikaile.net/jingjilunwen/huobiyinxinglunwen/2475481.html


