Evaluating Reinforcement Learning-Based Neural Controllers for Quadcopter Navigation in Windy Conditions
Published in Engineering Applications of Artificial Intelligence, 2025
The full paper can be found here.
Abstract: Accurate quadcopter navigation under windy conditions remains challenging for traditional control methods, especially in the presence of unpredictable wind gusts and strict navigational constraints. This paper evaluates Deep Reinforcement Learning (DRL)-based controllers under such conditions, analyzing the impact of wind domain randomization, multi-goal training, enhanced state representations with explicit wind information, and the use of temporal data to capture the dynamics affecting the vehicle over time.
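As a rough illustration of the training configurations described above, the sketch below shows per-episode wind domain randomization and a wind-augmented observation. The wind model, parameter ranges, and nine-dimensional base state are illustrative assumptions, not the paper's implementation; in a full AirSim setup the sampled vector would also be applied to the simulator (recent AirSim releases expose a `simSetWind` call) so the disturbance the agent observes matches the one it experiences.

```python
"""Hedged sketch: episode-level wind randomization and a wind-aware state.
All ranges and the base-state layout are illustrative assumptions."""
import numpy as np


class WindRandomizer:
    """Samples a new wind vector at the start of each episode (domain randomization)."""

    def __init__(self, max_speed_mps: float = 8.0, seed: int | None = None):
        self.max_speed_mps = max_speed_mps  # assumed upper bound on gust magnitude
        self.rng = np.random.default_rng(seed)

    def sample(self) -> np.ndarray:
        # Random horizontal direction and magnitude; vertical component kept at zero.
        heading = self.rng.uniform(0.0, 2.0 * np.pi)
        speed = self.rng.uniform(0.0, self.max_speed_mps)
        return np.array([speed * np.cos(heading), speed * np.sin(heading), 0.0])


def augment_observation(base_obs: np.ndarray, wind: np.ndarray) -> np.ndarray:
    """Append the wind vector to the base state so the policy can condition on it."""
    return np.concatenate([base_obs, wind])


if __name__ == "__main__":
    randomizer = WindRandomizer(seed=0)
    wind = randomizer.sample()      # drawn once per episode
    base_obs = np.zeros(9)          # placeholder: position, velocity, attitude
    obs = augment_observation(base_obs, wind)
    print(obs.shape)                # (12,): wind-aware state fed to the DRL agent
```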
Experiments in the AirSim simulator across four trajectories, evaluated under both no-wind and windy conditions, demonstrate that DRL-based controllers outperform classical methods, particularly under stochastic wind disturbances. Moreover, we show that training a DRL agent with domain randomization improves robustness against wind but reduces efficiency in no-wind scenarios. However, incorporating wind information into the agent’s state space enhances robustness without sacrificing performance in wind-free settings. Furthermore, training with stricter waypoint constraints emerges as the most effective strategy, leading to precise trajectories and improved generalization to wind disturbances. To further interpret the learned policies, we apply SHapley Additive exPlanations (SHAP) analysis, revealing how different training configurations influence the agent’s feature importance. These findings underscore the potential of DRL-based neural controllers for resilient autonomous aerial systems, highlighting the importance of structured training strategies, informed state representations, and explainability for real-world deployment.
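For readers unfamiliar with SHAP, the snippet below shows the general shape of such an analysis: a model-agnostic explainer is wrapped around a policy's action output, and mean absolute SHAP values give a per-feature importance ranking. The policy, state layout, and sampled states here are placeholders, not the models or data used in the paper.

```python
"""Hedged sketch: model-agnostic SHAP analysis of a policy's action output.
The policy function and state layout below are placeholders."""
import numpy as np
import shap


def policy_action(states: np.ndarray) -> np.ndarray:
    """Placeholder for a trained DRL policy: maps a batch of states to one
    action dimension. A real analysis would call the trained network here."""
    return np.tanh(states @ np.linspace(-1.0, 1.0, states.shape[1]))


feature_names = [
    "px", "py", "pz", "vx", "vy", "vz",   # position and velocity (assumed state layout)
    "roll", "pitch", "yaw",               # attitude
    "wind_x", "wind_y", "wind_z",         # explicit wind information
]

rng = np.random.default_rng(0)
background = rng.normal(size=(50, len(feature_names)))    # reference states for the explainer
eval_states = rng.normal(size=(10, len(feature_names)))   # states whose actions we explain

# Model-agnostic explainer around the policy's action output.
explainer = shap.KernelExplainer(policy_action, background)
shap_values = explainer.shap_values(eval_states, nsamples=200)

# Mean absolute SHAP value per feature: how strongly each input drives the action.
importance = np.abs(np.asarray(shap_values)).mean(axis=0).ravel()
for name, value in sorted(zip(feature_names, importance), key=lambda kv: -kv[1]):
    print(f"{name:>7s}: {value:.4f}")
```

Comparing such rankings across training configurations (for example, with and without explicit wind features) is the kind of feature-importance analysis the abstract refers to.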