Lookup NU author(s): Peng Zhang, Dr Jie Zhang (ORCiD), Dr Yang Long, Dr Bingzhang Hu
Full text for this publication is not currently held within this repository. Alternative links are provided below where available.
Batch processes are a significant and essential manufacturing route for the agile manufacturing of high value-added products, and they are typically difficult to control because of unknown disturbances, model-plant mismatches, and highly nonlinear characteristics. Traditional one-step reinforcement learning and neural networks have been applied to optimize and control batch processes. However, traditional one-step reinforcement learning and neural networks lack accuracy and robustness, leading to unsatisfactory performance. To overcome these difficulties, a modified multi-step action Q-learning algorithm (MMSA), based on multi-step action Q-learning (MSA), is proposed in this paper. In MSA, the control horizon is divided into periods with the same number of time steps, and within each period the same action is explored and applied continuously under a fixed greedy policy. In MMSA, by contrast, the exploration and selection of actions follow an improved, time-varying greedy policy over the whole horizon, which improves the flexibility and convergence speed of the learning algorithm. The proposed algorithm is applied to a highly nonlinear batch process and is shown to give better control performance than traditional one-step reinforcement learning and MSA.
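For illustration only, the sketch below shows the general multi-step-action idea described in the abstract: one action is held for a block of time steps, and exploration follows a decaying (time-varying) epsilon-greedy policy. This is a minimal, hypothetical Python example, not the authors' implementation; the tabular environment interface (env.reset() returning a discrete state, env.step() returning next state, reward and a done flag), the block length, and all hyperparameter values are assumptions.

import numpy as np

def epsilon(episode, eps_start=1.0, eps_end=0.05, decay=0.995):
    # Decaying exploration rate: one simple form of a time-varying greedy policy.
    return max(eps_end, eps_start * decay ** episode)

def multi_step_action_q_learning(env, n_states, n_actions, horizon,
                                 block_len=5, episodes=500,
                                 alpha=0.1, gamma=0.99):
    # Tabular Q-values over discrete states and actions (assumed discretisation).
    Q = np.zeros((n_states, n_actions))
    rng = np.random.default_rng(0)
    for ep in range(episodes):
        s = env.reset()
        t, done = 0, False
        while t < horizon and not done:
            # Choose one action and hold it for the whole block of steps.
            if rng.random() < epsilon(ep):
                a = int(rng.integers(n_actions))
            else:
                a = int(np.argmax(Q[s]))
            g, discount = 0.0, 1.0
            s_next = s
            for _ in range(min(block_len, horizon - t)):
                s_next, r, done = env.step(a)
                g += discount * r
                discount *= gamma
                t += 1
                if done:
                    break
            # Multi-step update: bootstrap from the state reached after the block.
            Q[s, a] += alpha * (g + discount * np.max(Q[s_next]) - Q[s, a])
            s = s_next
    return Q

The block length plays the role of the fixed action period in MSA, while the decaying epsilon stands in for the improved, varying greedy policy attributed to MMSA; the actual policy schedule used in the paper may differ.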
Author(s): Zhang P, Zhang J, Long Y, Hu B
Publication type: Conference Proceedings (inc. Abstract)
Publication status: Published
Conference Name: 24th International Conference on Methods and Models in Automation and Robotics (MMAR2019)
Year of Conference: 2019
Pages: 360-365
Online publication date: 14/10/2019
Acceptance date: 20/05/2019
ISBN: 9781728109336
Publisher: IEEE
URL: https://doi.org/10.1109/MMAR.2019.8864632
DOI: 10.1109/MMAR.2019.8864632