Analysis and Practice of Data Representation Methods for Deep Convolutional Neural Networks
Published: 2018-01-11 02:28
Keywords: Analysis and Practice of Data Representation Methods for Deep Convolutional Neural Networks. Source: Journal of Computer Research and Development, 2017, No. 6. Type: journal article
Related topics: deep convolutional neural networks; data representation; floating-point representation; fixed-point representation; convolution operation optimization
[Abstract]: Deep convolutional neural networks have shown remarkable performance in many fields and are widely deployed. As networks grow deeper and their structures become more complex, the demand for computing and storage resources keeps rising. Dedicated hardware can meet both demands, delivering high computational performance at low power, which makes it suitable for scenarios where general-purpose CPUs and GPUs cannot be used. However, many problems remain open in the design of such hardware, such as which data representation to choose and how to balance representation precision against hardware implementation cost. To address these problems, this paper builds error-analysis models for fixed-point and floating-point numbers, analyzes from a theoretical perspective how to choose the representation precision and how that choice affects network accuracy, and explores experimentally how different data representations affect hardware cost. Theoretical analysis and experimental verification show that, in general, floating-point representation holds an advantage in hardware overhead when meeting the same precision requirement. In addition, the convolution operation of neural networks is implemented in hardware based on the characteristics of floating-point representation; compared with a fixed-point implementation, power consumption and area are reduced by 92.9% and 77.2%, respectively.
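The abstract's error-analysis argument rests on a basic contrast between the two representations: rounding to a fixed-point grid bounds the *absolute* error, while rounding to a floating-point grid bounds the *relative* error. The paper's actual models are not reproduced here; the following is a minimal Python sketch of that contrast (the function names and bit widths are illustrative assumptions, not taken from the paper):

```python
import math

def quantize_fixed(x, frac_bits):
    """Round x to the nearest point on a fixed-point grid with
    frac_bits fractional bits; absolute error <= 2**-(frac_bits + 1)."""
    step = 2.0 ** -frac_bits            # uniform grid spacing
    return round(x / step) * step

def quantize_float(x, mant_bits):
    """Round x to the nearest point on a floating-point grid with
    mant_bits mantissa bits; relative error <= 2**-(mant_bits + 1)."""
    if x == 0.0:
        return 0.0
    exp = math.floor(math.log2(abs(x)))  # exponent of the leading bit
    step = 2.0 ** (exp - mant_bits)      # grid spacing scales with |x|
    return round(x / step) * step

x = 0.1
print(abs(quantize_fixed(x, 8) - x) <= 2.0 ** -9)           # True: absolute bound
print(abs(quantize_float(x, 8) - x) <= abs(x) * 2.0 ** -9)  # True: relative bound
```

This is why the error tradeoff depends on the value distribution: for small-magnitude weights and activations, the floating-point grid is locally finer than a fixed-point grid of comparable total width, which is consistent with the abstract's finding that floating-point can be cheaper in hardware at the same precision.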
[Affiliation]: Department of Computer Science and Technology, Tsinghua University; Tsinghua National Laboratory for Information Science and Technology (in preparation)
[Funding]: National Natural Science Foundation of China (61373025); National Key Research and Development Program of China (2016YFB1000303)
[Classification]: TP183
[Full-text snapshot]: This work was supported by the National Natural Science Foundation of China (61373025) and the National Key Research and Development Program of China (2016YFB1000303). (wpq14@mails.tsinghua.edu.cn) Convolutional neural networks (CNN), owing to their high accuracy, are wide…
[Similar literature]
Related journal articles (1)
1 Guo Lirui; Huang Dongmei; Zhang Chi. Research on representation methods for uncertain data at ocean observation stations [J]. Computer Applications and Software, 2012, No. 7.
Article ID: 1407789
Link: http://sikaile.net/kejilunwen/zidonghuakongzhilunwen/1407789.html