
AI and aviation: moving from safety to automation

05/02/2026

Artificial intelligence (AI) is beginning to see real-world applications in aviation, but further deployment requires steady progress and solid guardrails. In this interview with the European Union Aviation Safety Agency (EASA), we learn which technologies are maturing, what risks AI can mitigate, and how the industry is preparing for the future.

EASA has spent years working to integrate AI into aviation in a safe, regulated, and responsible manner. In this interview, the agency reviews the current state of AI in the sector, its benefits for operational safety, the associated emerging risks, and the roadmap that will shape its evolution over the coming decades.

 

At this point, what are the most mature AI applications, or those closest to operational implementation, in the aviation industry?

The most advanced AI applications for aviation safety are those based primarily on performance improvements that deep learning has brought to computer vision and natural language processing. For example:

  • Computer vision: camera-based pilot assistance systems for general aviation that detect non-cooperative traffic.
  • Natural language processing: identification of misunderstandings in communications between pilots and air traffic control.

AI and machine learning can also be highly useful for surrogate modeling; that is, using a faster, approximate model that mimics a far more complex and time-intensive simulation. For instance, a surrogate model can improve the performance of runway overrun warning systems by quickly estimating overrun risk for new flight conditions without requiring a full, costly simulation.
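The surrogate-modeling idea described above can be sketched in a few lines: sample an expensive simulator offline on a grid, then answer new queries with cheap interpolation. The sketch below is purely illustrative; the `expensive_overrun_risk` function, its numbers, and the table-based surrogate are hypothetical stand-ins, not an actual overrun model or any EASA method.

```python
# Minimal surrogate-model sketch (hypothetical numbers, illustration only).
# The "expensive" simulator below stands in for a slow, high-fidelity
# runway-overrun simulation; the surrogate is a precomputed lookup table
# queried with linear interpolation.

import bisect

def expensive_overrun_risk(touchdown_speed_kt: float) -> float:
    """Stand-in for a costly simulation: maps touchdown speed to an
    overrun-risk score in [0, 1]. Not real aerodynamics."""
    return min(1.0, max(0.0, (touchdown_speed_kt - 120.0) / 60.0) ** 2)

class TableSurrogate:
    """Cheap approximation: evaluate the simulator offline on a grid,
    then estimate new flight conditions by linear interpolation."""
    def __init__(self, lo: float, hi: float, n: int):
        self.xs = [lo + i * (hi - lo) / (n - 1) for i in range(n)]
        self.ys = [expensive_overrun_risk(x) for x in self.xs]  # offline cost

    def predict(self, x: float) -> float:
        # Clamp queries outside the sampled range to the nearest grid point.
        if x <= self.xs[0]:
            return self.ys[0]
        if x >= self.xs[-1]:
            return self.ys[-1]
        i = bisect.bisect_right(self.xs, x)
        x0, x1 = self.xs[i - 1], self.xs[i]
        t = (x - x0) / (x1 - x0)
        return self.ys[i - 1] * (1 - t) + self.ys[i] * t

surrogate = TableSurrogate(lo=100.0, hi=200.0, n=51)
fast_estimate = surrogate.predict(155.0)  # no full simulation at query time
```

In practice the surrogate would be a trained regression or neural-network model rather than a lookup table, but the division of labor is the same: pay the simulation cost once offline, then serve fast approximate risk estimates online.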

 

What traditional aviation risks could AI help reduce? Can you share examples where real mitigation is already being observed?

AI has the potential to deliver clear safety benefits across many applications. As mentioned in the previous examples, traffic detection systems can mitigate the risk of midair collision, and runway excursion warning systems address two key risks identified in the European Aviation Safety Plan 2026.

Another critical risk is runway collision. AI-based systems using high-resolution cameras, both onboard aircraft and on the ground, can detect unauthorized vehicle or aircraft incursions onto runways, helping prevent disasters like the one that occurred at Haneda Airport in Japan in January 2024.

 

AI use will also create new operational, regulatory, and cybersecurity risks. In the medium term, what concerns the EASA the most, and how might they be mitigated?

The EASA’s AI Roadmap 2.0 was developed specifically to identify and address all emerging risks posed by AI technology in safety-related applications:

  • The most obvious risk is that AI models often don’t perform well enough. To address this, the EASA has collaborated with industry partners to study specific use cases and add “AI assurance” and “AI risk mitigation” objectives to existing safety assessment and development assurance processes.
  • Another issue is the lack of transparency in AI applications and the emergence of new ways to interact with them. This means that human factors requirements must be expanded with specific AI explainability requirements to ensure that system behavior remains clear to users.

As noted in your question, AI also introduces additional cybersecurity risks, such as data poisoning attacks, which must be accounted for in current information security guidelines.

 

One of the major challenges is having reliable, shared data in a secure environment. How would you assess the readiness to work with open standards and ensure the quality, traceability, and protection of data needed for AI?

Data governance is a crucial issue for AI and machine learning. To avoid prescribing specific data sources or collection methods, the EASA’s AI Concept Paper guidance on data management focuses on defining an operational design domain and a set of data quality requirements that collected datasets must meet. Verification activities then build on these requirements to ensure the quality necessary to meet safety objectives.

 

What commitments or initiatives has the EASA launched to ensure that civil aviation can fully harness AI’s potential while maintaining the highest standards of operational safety and environmental protection?

In February 2020, the EASA published its first AI roadmap, created to support the safe deployment of AI in aviation. This marked the beginning of an initial exploration phase that allowed the investigation of specific industry use cases through Innovation Partnership Contracts (IPCs) and research projects. The roadmap was updated in 2023 to version 2.0, in order to account for the latest technological advances, such as logic and knowledge-based AI, as well as hybrid AI (under which the already well-known large language models are classified).

Following this initial exploration phase, the EASA’s AI roadmap entered its consolidation phase in early 2024. This second phase focuses on developing rules for Level 1 AI (human assistance) and Level 2 AI (human-AI collaboration), while continuing to explore Level 3 AI (advanced automation). This work is guided by the EASA’s AI Concept Paper (2nd edition), which reflects the program’s findings to date, based on industry partnerships and research projects.

 

Looking 10 to 15 years down the road, which AI advances do you expect to be the most transformative? What milestones will mark this evolution?

Over the next 10 years, the EASA’s AI roadmap identifies the move toward higher levels of automation as a key challenge, whether for human-AI collaboration (Level 2B AI) or protected advanced automation (Level 3A AI). On the technology front, certifying complex hardware platforms for AI acceleration and integrating large commercial models (including open-source ones) will pose additional challenges.

Looking further ahead, 15 years and beyond, certain limitations will need to be addressed in a third phase of EASA’s AI roadmap:

  • Online learning capabilities aren’t yet certifiable and would require adaptation of the certification framework.
  • Moreover, the transition from “advanced automation” to true “autonomy,” if deemed appropriate, will require an entirely new risk mitigation strategy.