Machine Reading Comprehension Based on a Bi-LSTM Model with Bilinear-Function Attention
Published: 2018-04-08 15:14
Topic: deep learning; Entry point: machine reading comprehension; Source: Computer Science (《計算機科學》), 2017 Supplement S1
[Abstract]: In recent years, with the wide application of deep learning to machine reading comprehension, the field has advanced rapidly. Targeting the semantic understanding and reasoning required by machine reading comprehension, this paper proposes a bidirectional long short-term memory (Bi-LSTM) model with bilinear-function attention. The model extracts the semantics of the passage, the question, and the candidate answers, and selects the correct answer. Tested on CET-4 and CET-6 listening-comprehension transcripts, word-by-word sequential input achieved roughly 2% higher accuracy than sentence-by-sentence input; in addition, adding a multi-layer attention-shift reasoning structure on top of the basic model raised accuracy by roughly 8%.
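Since the abstract only names the architecture, the following is a minimal PyTorch sketch of bilinear-function attention over Bi-LSTM encodings, with the repeated attention step standing in for the multi-layer attention-shift reasoning the paper describes. All class names, parameters, and dimension choices are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class BilinearAttentionReader(nn.Module):
    """Illustrative sketch: Bi-LSTM encoders + bilinear attention s_i = p_i^T W q."""

    def __init__(self, vocab_size: int, embed_dim: int, hidden_dim: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Bi-LSTM encoders for passage and question (word-by-word input).
        self.passage_rnn = nn.LSTM(embed_dim, hidden_dim,
                                   batch_first=True, bidirectional=True)
        self.question_rnn = nn.LSTM(embed_dim, hidden_dim,
                                    batch_first=True, bidirectional=True)
        # W in the bilinear attention score; bias-free linear map implements p_i^T W q.
        self.W = nn.Linear(2 * hidden_dim, 2 * hidden_dim, bias=False)

    def forward(self, passage_ids, question_ids, num_hops: int = 1):
        p, _ = self.passage_rnn(self.embed(passage_ids))    # (B, Lp, 2H)
        q, _ = self.question_rnn(self.embed(question_ids))  # (B, Lq, 2H)
        query = q.mean(dim=1)                               # (B, 2H) question summary
        context = query
        # Repeating the attention step is a stand-in for the paper's
        # multi-layer attention-shift reasoning structure.
        for _ in range(num_hops):
            scores = torch.bmm(self.W(p), query.unsqueeze(-1)).squeeze(-1)  # (B, Lp)
            weights = torch.softmax(scores, dim=-1)
            context = torch.bmm(weights.unsqueeze(1), p).squeeze(1)         # (B, 2H)
            query = context  # refined query for the next hop
        return context

# Hypothetical usage: the returned vector can be matched (e.g., by dot product)
# against Bi-LSTM encodings of each candidate answer to pick the best one.
model = BilinearAttentionReader(vocab_size=30000, embed_dim=128, hidden_dim=128)
passage = torch.randint(0, 30000, (2, 200))   # batch of 2 passages, 200 words each
question = torch.randint(0, 30000, (2, 20))
rep = model(passage, question, num_hops=3)    # (2, 256) attended passage vector
```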
[Affiliation]: PLA University of Science and Technology (中國人民解放軍理工大學)
[CLC Classification]: TP18; TP391.1
Article ID: 1722162
Link: http://sikaile.net/kejilunwen/zidonghuakongzhilunwen/1722162.html