Optimising Horizons in Model Predictive Control for Motion Cueing Algorithms Using Reinforcement Learning

Al-serri, Sari, Chalak Qazani, Mohamad Reza, Mohamed, Shady, Arogbonlo, Adetokunbo, Al-ashmori, Mohammed, Lim, Chee Peng, Nahavandi, Saeid, and Asadi, Houshyar (2024) Optimising Horizons in Model Predictive Control for Motion Cueing Algorithms Using Reinforcement Learning. In: Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics. pp. 2793-2800. From: SMC 2024: IEEE International Conference on Systems, Man, and Cybernetics, 6-10 October 2024, Kuching, Malaysia.

PDF (Published Version) - Restricted to Repository staff only

View at Publisher Website: https://doi.org/10.1109/SMC54092.2024.10...


Abstract

This paper explores the application of driving simulators across multiple sectors, highlighting the challenges associated with refining motion cueing algorithms (MCA) through model predictive control (MPC). These platforms allow drivers to experience the sensation of motion. While MPC-based MCA is advantageous for its precision in controlling motion simulation, it faces significant hurdles, such as the requirement for highly accurate system models and the extensive parameter tuning needed for each specific control scenario. These issues create a critical gap in achieving optimal simulation fidelity and efficiency at low computational cost, necessitating a novel approach to advance the MCA domain. To address these challenges, the study pioneers the use of a Deep Q-Network (DQN), a reinforcement learning (RL) technique, to optimise the horizons of MPC within the MCA domain. This innovation is significant because it introduces, for the first time, a method to dynamically adjust the horizons of an MPC-based MCA using a DQN that learns through continuous interaction with the simulation environment. This approach is designed to overcome the limitations of traditional meta-heuristic optimisation methods, such as the Grasshopper Optimisation Algorithm (GOA) and the Butterfly Optimisation Algorithm (BOA), by offering a more flexible and adaptable solution. The overarching goal of this research is to minimise the system's cost function by maximising a reward function that encompasses key performance metrics: specific force sensation, angular velocity, linear displacement, linear velocity, and angular displacement. By integrating the DQN into the MPC-based MCA environment, this study demonstrates faster computational running times and improved precision and efficiency of the simulations. This approach enhances the efficiency of the horizon-determination process, with promising implications for the advancement of the MCA domain.
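The core idea in the abstract — an RL agent choosing MPC horizon lengths so that maximising reward (the negative cost) minimises the controller's cost — can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the paper's method: the horizon candidates, the toy cost (tracking error falling with horizon length, solve time rising with it), and the use of a single-state tabular Q-learning update as a simplified stand-in for the full DQN.

```python
import random

# Candidate MPC prediction-horizon lengths the agent may choose
# (hypothetical values, not taken from the paper).
HORIZONS = [5, 10, 20, 40]

def toy_cost(horizon, demand):
    """Stand-in for the MPC cost: a tracking-error term that shrinks with
    longer horizons plus a computation-time term that grows with them.
    Purely illustrative -- not the paper's cost function."""
    tracking_error = demand / horizon   # longer horizon tracks the cue better
    compute_penalty = 0.05 * horizon    # but the optimisation takes longer
    return tracking_error + compute_penalty

def train(episodes=2000, eps=0.2, alpha=0.1, seed=0):
    """Single-state tabular Q-learning stand-in for the DQN.
    Actions are horizon choices; reward is the negative cost, so
    maximising reward minimises the cost, as described in the abstract."""
    rng = random.Random(seed)
    q = {h: 0.0 for h in HORIZONS}
    for _ in range(episodes):
        demand = rng.uniform(1.0, 5.0)  # toy "motion demand" each episode
        if rng.random() < eps:          # epsilon-greedy exploration
            h = rng.choice(HORIZONS)
        else:
            h = max(q, key=q.get)
        reward = -toy_cost(h, demand)
        q[h] += alpha * (reward - q[h])  # incremental Q-value update
    return q

q = train()
best = max(q, key=q.get)
```

After training, `best` holds the horizon with the highest learned Q-value, i.e. the lowest expected toy cost; the very long horizon loses out because its computation penalty dominates. The real method replaces the table with a deep network and the toy cost with the simulator's cost over specific force, angular velocity, and displacement terms.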

Item ID: 87033
Item Type: Conference Item (Research - E1)
ISBN: 978-1-6654-1020-5
Copyright Information: © 2024 IEEE.
Date Deposited: 04 Sep 2025 00:58
FoR Codes: 40 ENGINEERING > 4007 Control engineering, mechatronics and robotics > 400705 Control engineering @ 30%
40 ENGINEERING > 4007 Control engineering, mechatronics and robotics > 400706 Field robotics @ 20%
46 INFORMATION AND COMPUTING SCIENCES > 4611 Machine learning > 461105 Reinforcement learning @ 50%
SEO Codes: 28 EXPANDING KNOWLEDGE > 2801 Expanding knowledge > 280110 Expanding knowledge in engineering @ 100%
