Research on Event-Driven Multi-Agent Reinforcement Learning
Published: 2018-04-27 23:25
Topics: event-driven + multi-agent; Source: 《智能系統(tǒng)學(xué)報》 (CAAI Transactions on Intelligent Systems), 2017, No. 01
【Abstract】: To address the heavy communication and computation overhead in multi-agent reinforcement learning, this paper proposes an event-driven multi-agent reinforcement learning algorithm, focusing on the use of event-driven mechanisms at the learning-policy layer. During the interaction between the agents and the environment, the algorithm follows the event-driven paradigm: a trigger function is designed from the rate of change of each agent's observations, so that communication and learning updates need not take place in real time or at fixed periods, which reduces the number of data transmissions and computations within the same time span. In addition, the computational resource consumption of the algorithm is analyzed, and its convergence is established. Finally, simulation experiments show that the algorithm reduces the number of communications and policy traversals during learning, thereby easing the consumption of communication and computation resources.
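The paper itself provides no source code; the sketch below is a minimal Python illustration of the event-triggered idea summarized in the abstract, assuming a tabular Q-learning agent and a trigger based on the relative rate of change of the observation. All names (should_trigger, EventDrivenQAgent) and the threshold value are hypothetical and not taken from the paper.

import numpy as np

def should_trigger(obs, last_obs, threshold=0.1, eps=1e-8):
    # Fire an event when the relative change of the observation since the
    # last triggering instant exceeds the threshold.
    change_rate = np.linalg.norm(obs - last_obs) / (np.linalg.norm(last_obs) + eps)
    return change_rate > threshold

class EventDrivenQAgent:
    # Tabular Q-learning agent that communicates and updates only on events.
    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.Q = np.zeros((n_states, n_actions))
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.last_obs = None  # observation at the most recent event

    def act(self, state):
        # Epsilon-greedy action selection on the cached Q-table.
        if np.random.rand() < self.epsilon:
            return np.random.randint(self.Q.shape[1])
        return int(np.argmax(self.Q[state]))

    def step(self, state, action, reward, next_state, obs):
        # Communicate and learn only when the trigger fires; otherwise skip
        # both the transmission and the Q-update for this time step.
        if self.last_obs is None or should_trigger(obs, self.last_obs):
            self.last_obs = np.array(obs, copy=True)
            td_target = reward + self.gamma * np.max(self.Q[next_state])
            self.Q[state, action] += self.alpha * (td_target - self.Q[state, action])
            return True   # event fired: one transmission + one update
        return False      # no event: transmission and update both saved

Between two triggering instants the agent keeps acting on its cached Q-table; skipping the intermediate transmissions and Q-updates is where the claimed savings in communication and policy traversals would come from.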
【Affiliation】: School of Electrical Engineering, Southwest Jiaotong University
【Fund】: Young Scientists Fund of the National Natural Science Foundation of China (61304166)
【CLC Number】: TP181