Autonomous functioning via real-time monitoring and information management is an attractive ingredient in the design of any complex system. The inevitable presence of uncertainties due to malfunctions, environmental variations, aging, and modeling errors requires this management to be adaptive. Highlights of our adaptive approaches for high-performance flight systems are outlined below:
Adaptive Control of a Quadrotor with a Mid-flight Uncertainty
The video below shows a quadrotor experiencing a break in one of its propeller blades mid-flight, which in turn causes a thrust reduction and therefore a loss of control moment. Our adaptive control solution copes with this loss in real time by suitably combining data streaming from the quadrotor with a model of the quadrotor physics to self-tune the control inputs.1
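To make the self-tuning idea concrete, here is a minimal sketch of model reference adaptive control on a hypothetical scalar plant; the plant, gains, and the 50% effectiveness loss are illustrative assumptions, not the controller flown in the video.

```python
import numpy as np

# Minimal model reference adaptive control (MRAC) sketch on a hypothetical
# scalar plant whose control effectiveness drops by 50% mid-run, a stand-in
# for the broken-propeller scenario.
#
# Plant:      xdot  = a*x + lam*u        (lam unknown, drops at t = 10 s)
# Reference:  xmdot = am*xm + bm*r       (am < 0)
# Control:    u = kx*x + kr*r, with gradient adaptive laws driven by e = x - xm

a, am, bm, gamma, dt = 1.0, -4.0, 4.0, 50.0, 1e-3
x = xm = 0.0
kx, kr = 0.0, 1.0

for k in range(20000):
    t = k*dt
    lam = 1.0 if t < 10.0 else 0.5       # 50% loss of control effectiveness
    r = np.sign(np.sin(0.5*t))           # square-wave command
    u = kx*x + kr*r
    e = x - xm                           # model-following error
    kx += dt*(-gamma*e*x)                # self-tuning of the feedback gain
    kr += dt*(-gamma*e*r)                # self-tuning of the feedforward gain
    x  += dt*(a*x + lam*u)
    xm += dt*(am*xm + bm*r)

print(f"final tracking error |e| = {abs(x - xm):.4f}")
```

After the mid-run loss, the adaptive laws re-tune the gains so the plant again follows the reference model, which is the same mechanism that restores tracking after the blade break.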
An Integrative Adaptive Approach to High Performance and Safety
Autonomous systems need to ensure not only high performance but also satisfaction of safety constraints. This requires an integrative approach that combines adaptive control with tools such as Control Barrier Functions (CBFs). The adaptive control design accommodates obstacles and control saturation, and utilizes the notion of a closed-loop reference model.2 The animations below showcase the clear advantages of the integrative approach using a 6-degrees-of-freedom quadrotor model subject to a 50% loss of control effectiveness in two of the four rotors. Top Left: No Adaptive & No CBF; Top Right: Adaptive & No CBF; Bottom Left: No Adaptive & CBF; Bottom Right: Adaptive & CBF.
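Before turning to the animations, the safety-filter mechanism can be sketched in a few lines. The example below uses a hypothetical single-integrator agent and one circular obstacle rather than the 6-DOF quadrotor model; it solves the one-constraint CBF quadratic program in closed form, keeping the nominal input whenever it already satisfies the barrier condition and minimally modifying it otherwise.

```python
import numpy as np

# Minimal CBF safety-filter sketch: a hypothetical single-integrator agent
# avoiding one circular obstacle. For a single constraint, the CBF program
#   min ||u - u_des||^2  s.t.  dh/dt + alpha*h >= 0,
#   h(x) = ||x - x_obs||^2 - r^2
# has the closed-form solution implemented below.

x_obs, r_obs, alpha, dt = np.array([1.0, 0.0]), 0.4, 2.0, 1e-2
x, goal = np.array([0.0, 0.05]), np.array([2.0, 0.0])

def cbf_filter(x, u_des):
    h = (x - x_obs) @ (x - x_obs) - r_obs**2
    grad_h = 2.0*(x - x_obs)              # dh/dx, so dh/dt = grad_h @ u
    c = grad_h @ u_des + alpha*h          # barrier condition at the nominal input
    if c >= 0.0:                          # nominal input is already safe
        return u_des
    return u_des - c*grad_h/(grad_h @ grad_h)   # minimal safe modification

for _ in range(1000):
    u_des = -(x - goal)                   # nominal go-to-goal controller
    x = x + dt*cbf_filter(x, u_des)

print("final position:", x)
```

In the integrative approach, the adaptive controller keeps the uncertain closed loop close to its reference model, which is what makes the barrier condition evaluated by the filter trustworthy.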




Top-down perspective: Left – No Adaptive Control, Right – With Adaptive Control


The integrative approach combining adaptation and safety is demonstrated in the animations below using FlightGoggles, a photorealistic simulation environment. The control input is assumed to saturate at 2 N, and both the front and right rotors are subject to a 50% loss of control effectiveness.
Provably Correct Adaptive Controllers for Safe Formation Flight
Autonomous systems are often employed within multi-agent systems to achieve formation control. Our goal in this project was to achieve formation control with distributed multi-agent systems (MAS) while guaranteeing safety and stability in the presence of parametric uncertainty in the dynamics and limited communication. Our integrative approach combines adaptive control, CBFs, and connected graphs. A reference model is designed so as to ensure a safe and stable formation control strategy. This is combined with a provably correct adaptive control design that includes a CBF-based safety filter which suitably generates safe reference commands. Together, these are shown to guarantee boundedness, formation control, and forward invariance.3
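As a toy illustration of the role played by connected graphs, the sketch below drives four hypothetical single-integrator agents to a square formation with a standard consensus law over a ring graph; the adaptive and CBF-filter layers described above would wrap each agent's input and are omitted for brevity.

```python
import numpy as np

# Toy formation control over a connected graph: four hypothetical
# single-integrator agents run a consensus law on formation-error
# coordinates x_i - d_i, where d_i is agent i's formation offset.

L = np.array([[ 2., -1.,  0., -1.],      # graph Laplacian of a 4-agent ring
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [-1.,  0., -1.,  2.]])
d = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])   # square formation
x = np.random.randn(4, 2)                # random initial positions
dt = 1e-2

for _ in range(1000):
    u = -L @ (x - d)                     # consensus on formation errors
    x = x + dt*u

print("offsets relative to agent 0:\n", x - x[0])   # converges to d - d[0]
```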
In the animations below, the integrative approach (right video) reaches the goal of a final formation without colliding with any obstacles, even with a 30% loss of effectiveness in the rotors, whereas a collision occurs when only the CBF is included (left video).


In the videos below, the left column considers static obstacles while the right includes moving obstacles. The top row corresponds to an adaptive solution without the CBF filter. The bottom row corresponds to the integrative approach, with both adaptation and the CBF filter together with EBR in the control design. The superior performance of the integrative approach is evident from these animations.




Integration of Adaptive Control and Reinforcement Learning for High Performance Flight
Adaptive control and reinforcement learning are two different methods that are both commonly employed for the control of uncertain systems. Historically, adaptive control has excelled at real-time control of systems with specific model structures, through adaptive rules that learn the underlying parameters while providing strict guarantees on stability, asymptotic performance, and learning. Reinforcement learning methods are applicable to a broad class of systems and are able to produce near-optimal policies for highly complex control tasks. This is often enabled by significant offline training via simulation or the collection of large input-state datasets. A judicious combination of these two approaches can enable an understanding of the fundamental relationships between adaptation, learning, and optimization. While adaptation is necessarily a concept that is based on the past and the present, optimization is focused on the future; learning is a link between these two foundational concepts. Several directions are being pursued to gain this understanding. The following are some of the highlights:
We address the problem of real-time control and learning in dynamic systems subject to parametric uncertainties through a combination of Adaptive Control (AC) in the inner loop and a Reinforcement Learning (RL) based policy in the outer loop. In this AC-RL combination, the inner-loop AC contracts the closed-loop dynamics towards a reference system, and as the contraction takes hold, the RL policy in the outer loop directs the overall system towards optimal performance.4
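A minimal sketch of the AC-RL structure is given below, assuming a hypothetical scalar plant and a crude stand-in for the trained RL policy; it illustrates the division of labor, with the inner adaptive loop shrinking the model-following error while the outer policy commands the reference system.

```python
import numpy as np

# AC-RL sketch on a hypothetical scalar plant: an outer-loop policy (a crude
# stand-in for a trained RL network) issues reference commands, while an
# inner-loop MRAC forces the uncertain plant to behave like the reference
# system the policy relies on. All models and gains are illustrative.

a, am, gamma, dt = 0.5, -3.0, 30.0, 1e-3
lam = 0.6                                # unknown 40% loss of control effectiveness
x = xm = 0.0
kx, kr = 0.0, 1.0                        # adaptive feedback/feedforward gains

def rl_policy(xm, target=1.0):
    # stand-in for a trained policy acting on the reference-model state,
    # which the inner-loop contraction lets it rely on
    return 1.5*target - 0.5*xm

for _ in range(10000):
    r = rl_policy(xm)                    # outer loop: RL-style command
    u = kx*x + kr*r                      # inner loop: adaptive controller
    e = x - xm                           # model-following error
    kx += dt*(-gamma*e*x)                # gradient adaptive laws
    kr += dt*(-gamma*e*r)
    x  += dt*(a*x + lam*u)               # uncertain plant
    xm += dt*(am*xm - am*r)              # reference system (unit dc gain)

print(f"state = {x:.3f}, model-following error = {abs(x - xm):.4f}")
```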
The videos below show the AC-RL approach applied to a quadrotor landing on a moving platform. For a 25% parametric uncertainty, we observed that the AC-RL controller (second video) completed the landing task 80% of the time within a time interval of 3.5 seconds. In contrast, RL alone (first video) completed the task 48% of the time with an average time of about 7.5 seconds. Success here is defined not only in terms of landing on the platform but also ensuring that the altitude never goes below that of the platform, together with severe restrictions on the lateral variables. In the videos shown, for the same initial condition, the RL controller crashed (marked by **) while the AC-RL controller did not.
Shared Pilot-Autopilot Architectures for Resilient Flight
As aerial vehicles become more autonomous, and guidance and navigation systems become increasingly network-centric, there is a need to ensure a swift response to the growing forms of anomalies that may occur during operation. An ongoing project in our lab is the development of a shared control architecture that includes the actions of both a human pilot and an autopilot to ensure resilient tracking performance in the presence of anomalies. Autonomous model-based controllers, including model reference adaptive control, rely on model structures, specified performance goals, and assumptions on structured uncertainties. Trained human pilots, on the other hand, are able to detect anomalous vehicle behavior that differs from their internal model, but they have limits when attempting to rapidly learn unfamiliar and anomalous vehicle dynamics. This problem is exacerbated when the human pilot operates the vehicle from a remote ground station. The goal is therefore to examine shared control architectures where the pilot is tasked with higher-level decision-making tasks such as anomaly detection, estimation, and command regulation, and the autopilot is assigned lower-level tasks such as command following. A general goal here is to understand how such cyber-physical and human systems can be designed for safe and efficient performance.5
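The task split can be caricatured in a few lines of code. The sketch below, with an entirely hypothetical scalar plant and thresholds, assigns anomaly detection and command regulation to a "pilot" loop and command following to an "autopilot" loop.

```python
import numpy as np

# Shared pilot-autopilot sketch on a hypothetical scalar plant: the "pilot"
# loop performs anomaly detection and command regulation, while the
# "autopilot" loop performs low-level command following, mirroring the task
# split described above. All models and thresholds are illustrative.

a, b_nom, dt = -1.0, 1.0, 1e-2
x, r, k = 0.0, 1.0, 5.0                       # state, pilot command, autopilot gain
e_hist, anomaly = [], False

for step in range(2000):
    b = b_nom if step < 1000 else 0.2*b_nom   # severe actuator anomaly at t = 10 s
    # pilot (higher level): detect a persistent tracking anomaly and latch it
    if step >= 200 and np.mean(np.abs(e_hist[-50:])) > 0.3:
        anomaly = True
    # pilot (higher level): regulate the command to preserve capacity for maneuver
    r_cmd = 0.5*r if anomaly else r
    # autopilot (lower level): command following
    u = k*(r_cmd - x)
    e_hist.append(r_cmd - x)
    x += dt*(a*x + b*u)

print(f"anomaly detected: {anomaly}, final state: {x:.3f}")
```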
In the figure below, the last column corresponds to a shared pilot-autopilot architecture. The two vertical lines represent severe anomalies. The top two rows show the system performance; the bottom rows show the amount of control expenditure. With our approach, the Capacity for Maneuver (CfM) is minimally taxed while the performance improves.

- Dydek, Z.T., Annaswamy, A.M. and Lavretsky, E., 2012. Adaptive control of quadrotor UAVs: A design trade study with flight evaluations. IEEE Transactions on Control Systems Technology, 21(4), pp. 1400-1406. ↩︎
- Autenrieb, J. and Annaswamy, A., 2023, December. Safe and stable adaptive control for a class of dynamic systems. In 2023 62nd IEEE Conference on Decision and Control (CDC) (pp. 5059-5066). ↩︎
- J.A. Solano-Castellanos, P.A. Fisher, and A.M. Annaswamy, 2025. Safe and Stable Formation Control with Autonomous Multi-Agents Using Adaptive Control. arXiv preprint arXiv:2403.15674. ↩︎
- A.M. Annaswamy, A. Guha, Y. Cui, S. Tang, P.A. Fisher and J.E. Gaudio, 2023. Integration of Adaptive Control and Reinforcement Learning for Real-Time Control and Learning. IEEE Transactions on Automatic Control, vol. 68, no. 12, pp. 7740-7755. ↩︎
- E. Eraslan, Y. Yildiz and A. M. Annaswamy, “Shared Control Between Pilots and Autopilots: An Illustration of a Cyberphysical Human System,” in IEEE Control Systems Magazine, vol. 40, no. 6, pp. 77-97, Dec. 2020. ↩︎