Translation results for "multilayer feedforward" in the Mathematics subject category
Keywords of this article: "A multilayer feedforward network approach for econometric modeling". Compiled and published by 筆耕文化傳播.
- Sensitivity Analysis of Econometric Model Using Multilayer Feedforward Network
  基于多層前饋網(wǎng)絡(luò)的計(jì)量經(jīng)濟(jì)模型敏感性分析方法
- Multilayer feedforward network approach for econometric modeling
  一種多層前饋網(wǎng)絡(luò)計(jì)量經(jīng)濟(jì)建模方法
- Application of Multilayer Feedforward Neural Network in Customer Loss
  多層前饋神經(jīng)網(wǎng)絡(luò)在客戶流失分析中的應(yīng)用
- Econometric modeling using the multilayer feedforward network method is employed to fit historical data and approximate the underlying production function, solving the problem of selecting the form of the production function.
  其中，利用多層前饋網(wǎng)絡(luò)計(jì)量經(jīng)濟(jì)建模方法，擬合歷史數(shù)據(jù)，逼近潛在的生產(chǎn)函數(shù)，解決了如何選擇生產(chǎn)函數(shù)形式的問(wèn)題；
- The optimal control problem for the FCC process is solved using an MFNN (multilayer feedforward neural network) for system identification and modeling, periodogram analysis for model testing, and the improved Frank-Wolfe algorithm for steady-state optimization. Steady-state data from the production process at the Dagang Oil Refinery Works were used to train and test the neural network. The results show that the neural network is effective for the identification, modeling, and steady-state optimal control of complex nonlinear production processes.
  為解決催化裂化過(guò)程的優(yōu)化控制問(wèn)題，采用多層前饋神經(jīng)網(wǎng)絡(luò)進(jìn)行辨識(shí)、建模，用周期圖檢驗(yàn)法對(duì)模型檢驗(yàn)，用改進(jìn)的Frank-Wolfe算法進(jìn)行穩(wěn)態(tài)優(yōu)化計(jì)算，并以大港煉油廠實(shí)際生產(chǎn)過(guò)程的穩(wěn)態(tài)數(shù)據(jù)進(jìn)行試驗(yàn)和驗(yàn)證，說(shuō)明神經(jīng)網(wǎng)絡(luò)適合于解決非線性復(fù)雜生產(chǎn)過(guò)程的辨識(shí)、建模和穩(wěn)態(tài)優(yōu)化控制問(wèn)題。
- Multilayer feedforward neural networks have been widely used in many applications, for which back-propagation (BP) is the most popular training algorithm.
  多層前饋神經(jīng)網(wǎng)絡(luò)已廣泛應(yīng)用于很多領(lǐng)域，其中BP算法是應(yīng)用最普遍的訓(xùn)練算法。
- The input representation of multilayer feedforward neural networks is very important and has been thoroughly studied, while the output representation is hardly considered.
  輸入表示對(duì)BP網(wǎng)絡(luò)解決分類問(wèn)題時(shí)的性能非常重要，很多研究者在這方面作了不少的工作，而對(duì)于輸出表示的研究卻很少。
- The approximation ability of multilayer feedforward neural networks is studied. The choice of input stimulating signals, and the effect of increasing the number of hidden layers and of neurons per layer on the precision with which multilayer feedforward BP networks approximate nonlinear functions, are discussed.
  對(duì)多層前向神經(jīng)網(wǎng)絡(luò)的函數(shù)逼近能力進(jìn)行了研究，討論了用多層前向BP網(wǎng)絡(luò)來(lái)逼近非線性函數(shù)時(shí)，輸入激勵(lì)信號(hào)的選擇和增加隱層層數(shù)和每層神經(jīng)元個(gè)數(shù)對(duì)逼近精度的影響。
- A fast learning algorithm for multilayer feedforward perceptrons based on Kalman filter theory is presented.
  提出基于卡爾曼濾波前向多層感知器快速學(xué)習(xí)算法，對(duì)此算法進(jìn)行了詳細(xì)的推證。
- A systematic analysis is given of the main aspects of the global optimization of multilayer feedforward networks; some fundamental conditions that any global optimization algorithm must meet are presented, a practicable global optimization algorithm is suggested, and a theoretical proof of the reasonableness and convergence of the algorithm is given.
  在對(duì)多層前向網(wǎng)絡(luò)全局最優(yōu)化問(wèn)題所涉及的幾個(gè)主要方面進(jìn)行深入剖析的基礎(chǔ)上，給出了全局最優(yōu)化算法應(yīng)具備的基本條件和一種算法格式，對(duì)所給算法格式的收斂性做了理論證明。
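Several of the excerpts above note that back-propagation (BP) is the most common training algorithm for multilayer feedforward networks. The following is a minimal sketch of BP; the 2-8-1 sigmoid architecture, learning rate, and XOR task are illustrative assumptions for demonstration, not taken from any of the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR task: the classic problem a single-layer network cannot solve
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# 2-8-1 network: weights drawn from N(0, 1), zero biases
W1 = rng.normal(0.0, 1.0, (2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(0.0, 1.0, (8, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # forward pass through the two layers
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: propagate the squared-error gradient layer by layer
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    # gradient-descent weight updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

final = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print((final > 0.5).astype(int).ravel())  # thresholded network outputs
```

The forward pass and the layer-by-layer error propagation are the two phases ("recall" and "learning") that one of the English examples below distinguishes.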
English example sentences for "multilayer feedforward", drawn from original English-language sources, illustrating how the query term is used in context:
The simulation results have shown that multilayer feedforward neural network models with two hidden layers provide sufficiently accurate prediction of the concentration profile of the process.
Both the recall and the learning phases of the multilayer feedforward with backpropagation ANN model are considered.
This approach utilizes a multilayer feedforward neural network to compensate for model uncertainty associated with the robotic operation.
The HSOM is shown to form arbitrarily complex clusters, in analogy with multilayer feedforward networks.
The approximation ability of multilayer feedforward neural networks is studied. The choice of input stimulating signals, and the effect of increasing the number of hidden layers and of neurons per layer on the precision with which multilayer feedforward BP networks approximate nonlinear functions, are discussed. Under the constraints of a limited number of hidden layers and neurons per layer, the learning speed and approximation precision of the networks are improved by using prior knowledge of the approximated functions and adding one layer before the hidden layers. Simulation results are given to verify the above statements.
對(duì)多層前向神經(jīng)網(wǎng)絡(luò)的函數(shù)逼近能力進(jìn)行了研究,討論了用多層前向BP網(wǎng)絡(luò)來(lái)逼近非線性函數(shù)時(shí),輸入激勵(lì)信號(hào)的選擇和增加隱層層數(shù)和每層神經(jīng)元個(gè)數(shù)對(duì)逼近精度的影響。為了在隱層層數(shù)、每層神經(jīng)元個(gè)數(shù)有限的情況下,加快網(wǎng)絡(luò)學(xué)習(xí)速度,改善逼近效果,本文提出了利用對(duì)被逼近函數(shù)的先驗(yàn)知識(shí),在隱層前加一函數(shù)層的思想,并通過(guò)仿真證明了其有效性。
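The abstract above studies how hidden-layer size affects a feedforward network's ability to approximate a nonlinear function. A minimal sketch of such an experiment follows, assuming a one-hidden-layer tanh network fit to sin(x) at two widths; the target function, widths, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# 64 samples of the nonlinear target sin(x) on [-pi, pi]
x = np.linspace(-np.pi, np.pi, 64).reshape(-1, 1)
t = np.sin(x)

def fit_mlp(n_hidden, epochs=20000, lr=0.05):
    """Train a 1-n_hidden-1 tanh network by gradient descent; return final MSE."""
    W1 = rng.normal(0.0, 1.0, (1, n_hidden))
    b1 = np.zeros((1, n_hidden))
    W2 = rng.normal(0.0, 1.0, (n_hidden, 1))
    b2 = np.zeros((1, 1))
    n = len(x)
    for _ in range(epochs):
        h = np.tanh(x @ W1 + b1)      # hidden layer
        yhat = h @ W2 + b2            # linear output unit for regression
        err = yhat - t
        d_h = (err @ W2.T) * (1.0 - h ** 2)   # back-propagated error
        W2 -= lr * h.T @ err / n
        b2 -= lr * err.mean(axis=0, keepdims=True)
        W1 -= lr * x.T @ d_h / n
        b1 -= lr * d_h.mean(axis=0, keepdims=True)
    return float(np.mean(err ** 2))

mse_small = fit_mlp(2)    # very narrow hidden layer
mse_large = fit_mlp(20)   # wider hidden layer
print(mse_small, mse_large)
```

Comparing the two final errors gives a rough sense of how widening the hidden layer changes approximation precision, the quantity the abstract investigates.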
There is a set of single-input nonlinear affine systems that can be linearized by state and input transformations of the form z = Φ(x) and u = α(x) + β(x)v. In this paper, a multilayer feedforward neural network is used to realize the state transformation, and the system is then stabilized by a feedback controller designed with the Lyapunov method. Simulation shows that this method is feasible.
在用狀態(tài)方程描述的單輸入仿射非線性系統(tǒng)中存在一類可以通過(guò)非線性狀態(tài)變換(x)和輸入變換u=α(x)+β(x)v實(shí)現(xiàn)輸入──狀態(tài)線性化的系統(tǒng),用多層前饋神經(jīng)網(wǎng)絡(luò)通過(guò)實(shí)時(shí)學(xué)習(xí)實(shí)現(xiàn)此狀態(tài)變換,并在此基礎(chǔ)上用李亞普諾夫方法設(shè)計(jì)進(jìn)行系統(tǒng)鎮(zhèn)定的反饋控制器,仿真表明,學(xué)習(xí)在很短時(shí)間內(nèi)可收斂,且系統(tǒng)對(duì)外干擾和未建模動(dòng)態(tài)有一定魯棒性。
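The pair above linearizes a single-input affine system through the input transformation u = α(x) + β(x)v before designing a Lyapunov-based stabilizer. A numeric sketch of that idea follows, with an illustrative scalar plant and hand-derived α and β; the cited paper instead learns the state transformation online with a neural network, which is not reproduced here.

```python
# Feedback linearization of a scalar input-affine plant:
#   xdot = f0(x) + u,  with f0(x) = -x**3 + x   (illustrative choice)
# Picking alpha(x) = x**3 - x and beta = 1 makes the closed loop
# reduce exactly to the linear system xdot = v.

def f(x, u):
    return -x**3 + x + u          # single-input affine nonlinear plant

def alpha(x):
    return x**3 - x               # cancels the plant nonlinearity

beta = 1.0

k = 2.0                           # stabilizing gain for the linearized system
dt, steps = 0.01, 1000
x = 1.5                           # initial state
for _ in range(steps):
    v = -k * x                    # linear control in the transformed input
    u = alpha(x) + beta * v       # map back to the physical input
    x += dt * f(x, u)             # Euler step; dynamics reduce to xdot = -k*x

print(abs(x))  # state driven close to the origin
```

Because α exactly cancels the nonlinearity here, the Euler iteration behaves like the stable linear system x ← (1 - k·dt)·x, so the state decays toward zero.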
A new fast learning algorithm for training multilayer feedforward neural networks, using a variable, time-varying forgetting factor and a U-D factorization based fading-memory extended Kalman filter, is proposed in this paper. In comparison with BP and extended Kalman filter (EKF) based learning algorithms, the new algorithm not only clearly improves the convergence rate and numerical stability, but also provides much more accurate learning results in fewer iterations with fewer hidden nodes. In addition, it is less affected by the choice of initial weights, the initial covariance matrix, and other setup parameters. The results of simulated computations for nonlinear dynamic system modelling and identification show that the new algorithm is an effective and efficient learning algorithm for feedforward neural networks.
本文針對(duì)前饋神經(jīng)網(wǎng)絡(luò)BP算法所存在的收斂速度慢且常遇局部極小值等缺陷，提出一種基于U-D分解的漸消記憶推廣卡爾曼濾波學(xué)習(xí)新算法。與BP和EKF學(xué)習(xí)算法相比，新算法不僅大大加快了學(xué)習(xí)收斂速度、數(shù)值穩(wěn)定性好，而且需較少的學(xué)習(xí)次數(shù)和隱節(jié)點(diǎn)數(shù)即可達(dá)到更好的學(xué)習(xí)效果，對(duì)初始權(quán)值、初始方差陣等參數(shù)的選取不敏感，便于工程應(yīng)用。非線性系統(tǒng)建模與辨識(shí)的仿真計(jì)算表明，該算法是提高網(wǎng)絡(luò)學(xué)習(xí)速度、改善學(xué)習(xí)效果的一種非常有效的方法。
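The abstract above trains a feedforward network with a fading-memory extended Kalman filter (EKF), treating the weights as the filter state. A minimal sketch of that idea for a single tanh neuron follows; the paper's U-D factorized covariance and multilayer case are not reproduced, and the data, forgetting factor, and noise variance are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# training data: samples of the target y = tanh(2*x1 - x2 + 0.5)
X = rng.uniform(-1.0, 1.0, (300, 2))
w_true = np.array([2.0, -1.0, 0.5])
T = np.tanh(X @ w_true[:2] + w_true[2])

w = np.zeros(3)          # weights and bias, treated as the EKF state
P = np.eye(3) * 10.0     # state covariance
R = 1e-2                 # assumed measurement-noise variance
lam = 0.99               # fading-memory forgetting factor

for xi, ti in zip(X, T):
    yi = np.tanh(xi @ w[:2] + w[2])                      # network output
    H = (1.0 - yi**2) * np.array([xi[0], xi[1], 1.0])    # Jacobian dy/dw
    S = H @ P @ H + R                                    # innovation variance
    K = P @ H / S                                        # Kalman gain
    w = w + K * (ti - yi)                                # weight update
    P = (P - np.outer(K, H @ P)) / lam                   # covariance + forgetting

print(np.round(w, 2))  # weights drawn toward [2, -1, 0.5]
```

Dividing the covariance by the forgetting factor λ < 1 keeps the filter from "going to sleep", which is what gives the fading-memory variant its faster adaptation compared with a plain EKF.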
 
Article ID: 169679
Article link: http://sikaile.net/jingjilunwen/jiliangjingjilunwen/169679.html