A game-based deep reinforcement learning approach for energy-efficient computation in MEC systems

Authors:

Highlights:

Abstract

Many previous works on energy-efficient computation optimization for mobile edge computing (MEC) systems assume synchronous offloading, where all mobile devices share the same data arrival time or computation deadline in orthogonal frequency division multiple access (OFDMA) or time division multiple access (TDMA) systems. However, actual offloading scenarios are more complex than synchronous offloading following a first-come, first-served rule. In this paper, we study a polling-callback energy-saving offloading strategy in which data arrival times and task processing times are asynchronous. Under task processing time constraints, the time-sharing MEC data transmission problem is formulated as a total energy consumption minimization problem. Using a semi-closed-form optimization technique, the energy consumption optimization is decomposed into two subproblems: computation (data partition) and transmission (time division). To reduce the computational complexity of offloading computation under time-varying channel conditions, we propose a game-learning algorithm that combines DDQN and distributed LSTM with intermediate state transition (named DDQNL-IST). DDQNL-IST uses distributed LSTM and double-Q learning as part of the approximator to improve the processing and prediction of time intervals and delays in time series. The proposed DDQNL-IST algorithm ensures rationality and convergence. Finally, simulation results show that the proposed algorithm outperforms DDQN, DQN, and BCD-based optimal methods.
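To make the algorithmic core concrete, the sketch below shows the general pattern the abstract describes: a recurrent Q-network whose LSTM processes a short history of observations (e.g., channel and task states), trained with the double-Q target used by DDQN. This is a minimal illustration in PyTorch, not the authors' DDQNL-IST implementation; the network sizes, sequence length, reward definition (e.g., negative energy cost), and all hyperparameters are assumptions, and the game-theoretic and intermediate-state-transition components are omitted.

# Illustrative sketch only: DDQN update with an LSTM approximator.
# All shapes and hyperparameters below are hypothetical.
import torch
import torch.nn as nn

class RecurrentQNet(nn.Module):
    """LSTM-based Q-network: maps a sequence of observations to Q-values."""
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, obs_seq):
        # obs_seq: (batch, seq_len, obs_dim); use the last hidden state
        out, _ = self.lstm(obs_seq)
        return self.head(out[:, -1, :])  # (batch, n_actions)

def double_q_targets(online, target, next_obs_seq, rewards, dones, gamma=0.99):
    """DDQN target: the online net selects the action, the target net evaluates it."""
    with torch.no_grad():
        next_actions = online(next_obs_seq).argmax(dim=1, keepdim=True)
        next_q = target(next_obs_seq).gather(1, next_actions).squeeze(1)
        return rewards + gamma * (1.0 - dones) * next_q

# Usage sketch with hypothetical dimensions
obs_dim, n_actions, batch, seq_len = 8, 4, 32, 10
online, target = RecurrentQNet(obs_dim, n_actions), RecurrentQNet(obs_dim, n_actions)
target.load_state_dict(online.state_dict())
opt = torch.optim.Adam(online.parameters(), lr=1e-3)

obs = torch.randn(batch, seq_len, obs_dim)         # history of states
actions = torch.randint(0, n_actions, (batch, 1))  # offloading decisions
rewards = torch.randn(batch)                       # e.g., negative energy cost
next_obs = torch.randn(batch, seq_len, obs_dim)
dones = torch.zeros(batch)

q = online(obs).gather(1, actions).squeeze(1)
loss = nn.functional.mse_loss(q, double_q_targets(online, target, next_obs, rewards, dones))
opt.zero_grad(); loss.backward(); opt.step()

Decoupling action selection (online network) from action evaluation (target network) is what distinguishes the double-Q update from plain DQN and reduces the overestimation bias the abstract's DDQN baseline also addresses.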

Keywords: Edge computing, Game-learning, Computation offloading, Deep reinforcement learning, Energy-efficient

Article history: Received 11 June 2021, Revised 26 October 2021, Accepted 27 October 2021, Available online 2 November 2021, Version of Record 9 November 2021.

Paper link: https://doi.org/10.1016/j.knosys.2021.107660